From e0ne at e0ne.info Sun Jul 1 12:08:52 2018 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Sun, 1 Jul 2018 15:08:52 +0300 Subject: [openstack-dev] [Openstack] Cinder volume Troubleshoot In-Reply-To: References: Message-ID: Hi Kevin, Do you have any errors or tracebacks in /var/log/cinder/cinder-volume.log? Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ On Sun, Jul 1, 2018 at 12:02 PM, Kevin Kwon wrote: > > Dear All! > > > would you please let me know how can troubleshoot below case? > > i don't know why below Storage server is down. > > Please help to figure it out.. > > > root at OpenStack-Controller:~# openstack volume service list > +------------------+-----------------------+------+--------- > +-------+----------------------------+ > | Binary | Host | Zone | Status | State | > Updated At | > +------------------+-----------------------+------+--------- > +-------+----------------------------+ > | cinder-scheduler | OpenStack-Controller | nova | enabled | up | > 2018-07-01T08:58:19.000000 | > | cinder-volume | OpenStack-Storage at lvm | nova | enabled | down | > 2018-07-01T07:28:59.000000 | > +------------------+-----------------------+------+--------- > +-------+----------------------------+ > root at OpenStack-Controller:~# > > > Kevin > > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsbryant at electronicjungle.net Sun Jul 1 13:00:46 2018 From: jsbryant at electronicjungle.net (Jay Bryant) Date: Sun, 1 Jul 2018 08:00:46 -0500 Subject: [openstack-dev] [Openstack] Cinder volume Troubleshoot In-Reply-To: References: Message-ID: Kevin, Just a note thatyou may need to look way back in the logs to find the cause as there m as y be many periodic job failures filling the logs. Jay On Sun, Jul 1, 2018, 7:09 AM Ivan Kolodyazhny wrote: > Hi Kevin, > > Do you have any errors or tracebacks in /var/log/cinder/cinder-volume.log? > > Regards, > Ivan Kolodyazhny, > http://blog.e0ne.info/ > > On Sun, Jul 1, 2018 at 12:02 PM, Kevin Kwon wrote: > >> >> Dear All! >> >> >> would you please let me know how can troubleshoot below case? >> >> i don't know why below Storage server is down. >> >> Please help to figure it out.. 
>> >> >> root at OpenStack-Controller:~# openstack volume service list >> >> +------------------+-----------------------+------+---------+-------+----------------------------+ >> | Binary | Host | Zone | Status | State | >> Updated At | >> >> +------------------+-----------------------+------+---------+-------+----------------------------+ >> | cinder-scheduler | OpenStack-Controller | nova | enabled | up | >> 2018-07-01T08:58:19.000000 | >> | cinder-volume | OpenStack-Storage at lvm | nova | enabled | down | >> 2018-07-01T07:28:59.000000 | >> >> +------------------+-----------------------+------+---------+-------+----------------------------+ >> root at OpenStack-Controller:~# >> >> >> Kevin >> >> >> _______________________________________________ >> Mailing list: >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> Post to : openstack at lists.openstack.org >> Unsubscribe : >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From abishop at redhat.com Sun Jul 1 14:48:29 2018 From: abishop at redhat.com (Alan Bishop) Date: Sun, 1 Jul 2018 10:48:29 -0400 Subject: [openstack-dev] [Openstack] Cinder volume Troubleshoot In-Reply-To: References: Message-ID: On Sun, Jul 1, 2018 at 9:02 AM Jay Bryant wrote: > Kevin, > > Just a note thatyou may need to look way back in the logs to find the > cause as there m as y be many periodic job failures filling the logs. > > Jay > > On Sun, Jul 1, 2018, 7:09 AM Ivan Kolodyazhny wrote: > >> Hi Kevin, >> >> Do you have any errors or tracebacks in /var/log/cinder/cinder-volume.log? >> >> Regards, >> Ivan Kolodyazhny, >> http://blog.e0ne.info/ >> >> On Sun, Jul 1, 2018 at 12:02 PM, Kevin Kwon wrote: >> >>> >>> Dear All! >>> >>> >>> would you please let me know how can troubleshoot below case? >>> >>> i don't know why below Storage server is down. >>> >>> Please help to figure it out.. >>> >>> >>> root at OpenStack-Controller:~# openstack volume service list >>> >>> +------------------+-----------------------+------+---------+-------+----------------------------+ >>> | Binary | Host | Zone | Status | State | >>> Updated At | >>> >>> +------------------+-----------------------+------+---------+-------+----------------------------+ >>> | cinder-scheduler | OpenStack-Controller | nova | enabled | up | >>> 2018-07-01T08:58:19.000000 | >>> | cinder-volume | OpenStack-Storage at lvm | nova | enabled | down | >>> 2018-07-01T07:28:59.000000 | >>> >>> +------------------+-----------------------+------+---------+-------+----------------------------+ >>> >> I don't know what tooling you use to deploy OpenStack, but I notice cinder-volume's backend is named "lvm." Do you know if the LVM backend is configured to use a dedicated block device on the storage node? Or might it be using a loopback device such how tooling like puppet-cinder handles things (see [1])? [1] https://github.com/openstack/puppet-cinder/blob/master/manifests/setup_test_volume.pp#L22 If the LVM backend uses a loopback device, then remember that loopback devices are not automatically restored on reboot. 
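A quick way to check this on the storage node is sketched below (the backing file path and VG name are assumptions taken from the puppet-cinder defaults linked above, so adjust them to your deployment):

    losetup -a                                  # the backing loop device should be listed
    vgs                                         # the cinder-volumes VG should be listed

    # If the loop device is missing after a reboot, re-attach it,
    # re-activate the VG and restart the volume service:
    losetup -f /var/lib/cinder/cinder-volumes   # backing file path is an assumption
    vgchange -ay cinder-volumes                 # VG name is an assumption
    systemctl restart cinder-volume             # or openstack-cinder-volume, depending on the distro
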
A common failure scenario is the storage node reboots, and the cinder-volume LVM backend does not come up because the LVM loopback device isn't restored. Alan > root at OpenStack-Controller:~# >>> >>> >>> Kevin >>> >>> >>> _______________________________________________ >>> Mailing list: >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >>> Post to : openstack at lists.openstack.org >>> Unsubscribe : >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >>> >>> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lijie at unitedstack.com Mon Jul 2 03:08:51 2018 From: lijie at unitedstack.com (=?utf-8?B?UmFtYm8=?=) Date: Mon, 2 Jul 2018 11:08:51 +0800 Subject: [openstack-dev] [nova] about filter the flavor Message-ID: Hi,all I have an idea.Now we can't filter the special flavor according to the property.Can we achieve it?If we achieved this,we can filter the flavor according the property's key and value to filter the flavor. What do you think of the idea?Can you tell me more about this ?Thank you very much. Best Regards Rambo -------------- next part -------------- An HTML attachment was scrubbed... URL: From gdubreui at redhat.com Mon Jul 2 05:17:41 2018 From: gdubreui at redhat.com (Gilles Dubreuil) Date: Mon, 2 Jul 2018 15:17:41 +1000 Subject: [openstack-dev] [neutron][graphql] PoC with Oslo integration Message-ID: Hi, We now have an initial base for using GraphQL [1] as you can see from [2]. What we need now is too use Oslo properly to police the requests. The best way to achieve that would likely to use a similar approach as the pecan hooks which are in place for v2.0. Ultimately some of the code could be share between v2.0 and graphql but that's not a goal or either a priority for now. We need Neutron developers to help with the design and to get this moving in the right direction. I'm scheduling an on-line working session for next week (using either BlueJeans or Google Hangouts)? Please vote on doodle [2] on the best time for you (please understand that we have to cover all time zones). Thanks, Gilles [1] https://storyboard.openstack.org/#!/story/2002782 [2] https://review.openstack.org/#/c/575898/ [3] https://doodle.com/poll/43kx8nfpe6w6pvia From superuser151093 at gmail.com Mon Jul 2 05:27:17 2018 From: superuser151093 at gmail.com (super user) Date: Mon, 2 Jul 2018 14:27:17 +0900 Subject: [openstack-dev] [devstack-dev][swift] Limitations of Erasure Coding in Swift Message-ID: Hello everybody, I would like to ask about the limitations of Erasure Coding in Swift right now. What can we do to overcome these limitations? Thank you. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sferdjao at redhat.com Mon Jul 2 07:20:03 2018 From: sferdjao at redhat.com (Sahid Orentino Ferdjaoui) Date: Mon, 2 Jul 2018 09:20:03 +0200 Subject: [openstack-dev] [nova] about filter the flavor In-Reply-To: References: Message-ID: <20180702072003.GA3755@redhat> On Mon, Jul 02, 2018 at 11:08:51AM +0800, Rambo wrote: > Hi,all > > I have an idea.Now we can't filter the special flavor according to > the property.Can we achieve it?If we achieved this,we can filter the > flavor according the property's key and value to filter the > flavor. What do you think of the idea?Can you tell me more about > this ?Thank you very much. Is that not the aim of AggregateTypeAffinityFilter and/or AggregateInstanceExtraSpecFilter? Based on flavor or flavor properties the instances can only be scheduled on a specific set of hosts. https://git.openstack.org/cgit/openstack/nova/tree/nova/scheduler/filters/type_filter.py https://git.openstack.org/cgit/openstack/nova/tree/nova/scheduler/filters/aggregate_instance_extra_specs.py Thanks, s. > > Best Regards > Rambo > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From lijie at unitedstack.com Mon Jul 2 07:43:00 2018 From: lijie at unitedstack.com (=?utf-8?B?5p2O5p2w?=) Date: Mon, 2 Jul 2018 15:43:00 +0800 Subject: [openstack-dev] [nova] about filter the flavor In-Reply-To: <20180702072003.GA3755@redhat> References: <20180702072003.GA3755@redhat> Message-ID: Oh,sorry,not this means,in my opinion,we could filter the flavor in flavor list.such as the cli:openstack flavor list --property key:value. ------------------ Original ------------------ From: "Sahid Orentino Ferdjaoui"; Date: 2018年7月2日(星期一) 下午3:20 To: "OpenStack Developmen"; Subject: Re: [openstack-dev] [nova] about filter the flavor On Mon, Jul 02, 2018 at 11:08:51AM +0800, Rambo wrote: > Hi,all > > I have an idea.Now we can't filter the special flavor according to > the property.Can we achieve it?If we achieved this,we can filter the > flavor according the property's key and value to filter the > flavor. What do you think of the idea?Can you tell me more about > this ?Thank you very much. Is that not the aim of AggregateTypeAffinityFilter and/or AggregateInstanceExtraSpecFilter? Based on flavor or flavor properties the instances can only be scheduled on a specific set of hosts. https://git.openstack.org/cgit/openstack/nova/tree/nova/scheduler/filters/type_filter.py https://git.openstack.org/cgit/openstack/nova/tree/nova/scheduler/filters/aggregate_instance_extra_specs.py Thanks, s. > > Best Regards > Rambo > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From zhengzhenyulixi at gmail.com Mon Jul 2 07:47:22 2018 From: zhengzhenyulixi at gmail.com (Zhenyu Zheng) Date: Mon, 2 Jul 2018 15:47:22 +0800 Subject: [openstack-dev] [nova] Continuously growing request_specs table Message-ID:

Hi, It seems that the current request_specs record did not get removed even when the related instance is gone, which leads to a continuously growing request_specs table. How is that so? Is it because the delete process could fail and we would have to recover the request_spec if we deleted it? How about adding a nova-manage CLI command for operators to clean up out-dated request specs records from the table by comparing the request specs and existence of related instance? BR, Kevin Zheng

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From dougal at redhat.com Mon Jul 2 08:39:21 2018 From: dougal at redhat.com (Dougal Matthews) Date: Mon, 2 Jul 2018 09:39:21 +0100 Subject: [openstack-dev] [mistral] Mistral Monthly July 2018 Message-ID:

Hey Mistralites! Here is your monthly recap of what's what in the Mistral community. Arriving to you a day late as the 1st was a Sunday. When that happens I'll just aim to send it as close to the 1st as I can. Either slightly early or slightly late.

# General News
Vitalii Solodilov joined the Mistral core team. He has been contributing regularly with high quality patches and reviews for a while now. Welcome aboard!

# Releases
No releases this month. Rocky-3 is at the end of July, so we will see more release activity this month.

# Notable Changes and Additions
- The action-execution-reporting blueprint was completed. This work sees a heartbeat used to check that action executions are still running. If they have stopped they will be closed. Previously they would be stuck in the RUNNING state.
- A number of configuration options were added to change settings in the YAQL engine.

# Milestones, Reviews, Bugs and Blueprints
- 26 commits and 222 reviews
- 105 Open bugs (no change from last month).
- Rocky-3 numbers: Blueprints: 1 Unknown, 4 Not started, 3 Started, 1 Slow progress, 2 Implemented Bugs: 2 Incomplete, 2 Invalid, 16 Confirmed, 7 Triaged, 13 In Progress, 3 Fix Released

That's all I have for this month! We have lots to do for Rocky-3, so back to work! :-) Dougal

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From dougal at redhat.com Mon Jul 2 09:38:05 2018 From: dougal at redhat.com (Dougal Matthews) Date: Mon, 2 Jul 2018 10:38:05 +0100 Subject: [openstack-dev] [mistral][ptl] PTL On Vacation 3rd - 6th July Message-ID:

Hey all, I'll be out for the rest of the week after today. I don't anticipate anything coming up but Renat Akhmerov is standing in as PTL while I'm out. See you all on Monday next week. Cheers, Dougal

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From yamamoto at midokura.com Mon Jul 2 10:02:31 2018 From: yamamoto at midokura.com (Takashi Yamamoto) Date: Mon, 2 Jul 2018 19:02:31 +0900 Subject: [openstack-dev] [taas] LP project changes Message-ID:

hi, I created an LP team "tap-as-a-service-drivers", whose initial members are the same as the existing tap-as-a-service-core group on gerrit. I made the team the Maintainer and Driver of the tap-as-a-service project. This way, someone in the team can take it over even if I disappeared suddenly.
:-) From gergely.csatari at nokia.com Mon Jul 2 11:15:32 2018 From: gergely.csatari at nokia.com (Csatari, Gergely (Nokia - HU/Budapest)) Date: Mon, 2 Jul 2018 11:15:32 +0000 Subject: [openstack-dev] [Edge-computing] [edge][glance][mixmatch]: Wiki of the possible architectures for image synchronisation In-Reply-To: <0B139046-4F69-452E-B390-C756543EA270@windriver.com> References: <54898258-0FC0-46F3-9C64-FE4CEEA2B78C@windriver.com> <0B139046-4F69-452E-B390-C756543EA270@windriver.com> Message-ID: Hi, Going inline. From: Waines, Greg [mailto:Greg.Waines at windriver.com] Sent: Friday, June 29, 2018 4:25 AM In-lined comments / questions below, Greg. From: "Csatari, Gergely (Nokia - HU/Budapest)" > Date: Thursday, June 28, 2018 at 3:35 AM Hi, I’ve added the following pros and cons to the different options: * One Glance with multiple backends [1] [Greg] I’m not sure I understand this option. Is each Glance Backend completely independent ? e.g. when I do a “glance image-create ...” am I specifying a backend and that’s where the image is to be stored ? This is what I was originally thinking. So I was thinking that synchronization of images to Edge Clouds is simply done by doing “glance image-create ...” to the appropriate backends. But then you say “The syncronisation of the image data is the responsibility of the backend (eg.: CEPH).” ... which makes it sound like my thinking above is wrong and the Backends are NOT completely independent, but instead in some sort of replication configuration ... is this leveraging ceph replication factor or something (for example) ? [G0]: According to my understanding the backends are in a replication configuration in this case. Jokke, am I right? * Pros: * Relatively easy to implement based on the current Glance architecture * Cons: * Requires the same Glance backend in every edge cloud instance * Requires the same OpenStack version in every edge cloud instance (apart from during upgrade) * Sensitivity for network connection loss is not clear [Greg] I could be wrong, but even though the OpenStack services in the edge clouds are using the images in their glance backend with a direct URL, I think the OpenStack services (e.g. nova) still need to get the direct URL via the Glance API which is ONLY available at the central site. So don’t think this option supports autonomy of edge Subcloud when connectivity is lost to central site. [G0]: Can’t the url point to the local Glance backend somehow? * Several Glances with an independent syncronisation service, sych via Glance API [2] * Pros: * Every edge cloud instance can have a different Glance backend * Can support multiple OpenStack versions in the different edge cloud instances * Can be extended to support multiple VIM types * Cons: * Needs a new synchronisation service [Greg] Don’t believe this is a big con ... suspect we are going to need this new synchronization service for synchronizing resources of a number of other openstack services ... not just glance. [G0]: I agree, it is not a big con, but it is a con 😊 Should I add some note saying, that a synch service is most probably needed anyway? * Several Glances with an independent syncronisation service, synch using the backend [3] [Greg] This option seems a little odd to me. We are synching the GLANCE DB via some new synchronization service, but synching the Images themselves via the backend ... I think that would be tricky to ensure consistency. [G0]: Yes, there is a place for errors here. 
* Pros: * I could not find any * Cons: * Needs a new synchronisation service * One Glance and multiple Glance API servers [4] * Pros: * Implicitly location aware * Cons: * First usage of an image always takes a long time * In case of network connection error to the central Galnce Nova will have access to the images, but will not be able to figure out if the user have rights to use the image and will not have path to the images data [Greg] Yeah we tripped over the issue that although the Glance API can cache the image itself, it does NOT cache the image meta data (which I am guessing has info like “user access” etc.) ... so this option improves latency of access to image itself but does NOT provide autonomy. We plan on looking at options to resolve this, as we like the “implicit location awareness” of this option ... and believe it is an option that some customers will like. If anyone has any ideas ? Are these correct? Do I miss anything? Thanks, Gerg0 [1]: https://wiki.openstack.org/wiki/Image_handling_in_edge_environment#One_Glance_with_multiple_backends [2]: https://wiki.openstack.org/wiki/Image_handling_in_edge_environment#Several_Glances_with_an_independent_syncronisation_service.2C_sych_via_Glance_API [3]: https://wiki.openstack.org/wiki/Image_handling_in_edge_environment#Several_Glances_with_an_independent_syncronisation_service.2C_synch_using_the_backend [4]: https://wiki.openstack.org/wiki/Image_handling_in_edge_environment#One_Glance_and_multiple_Glance_API_servers From: Csatari, Gergely (Nokia - HU/Budapest) Sent: Monday, June 11, 2018 4:29 PM To: Waines, Greg >; OpenStack Development Mailing List (not for usage questions) >; edge-computing at lists.openstack.org Subject: RE: [Edge-computing] [edge][glance][mixmatch]: Wiki of the possible architectures for image synchronisation Hi, Thanks for the comments. I’ve updated the wiki: https://wiki.openstack.org/wiki/Image_handling_in_edge_environment#Several_Glances_with_an_independent_syncronisation_service.2C_synch_using_the_backend Br, Gerg0 From: Waines, Greg [mailto:Greg.Waines at windriver.com] Sent: Friday, June 8, 2018 1:46 PM To: Csatari, Gergely (Nokia - HU/Budapest) >; OpenStack Development Mailing List (not for usage questions) >; edge-computing at lists.openstack.org Subject: Re: [Edge-computing] [edge][glance][mixmatch]: Wiki of the possible architectures for image synchronisation Responses in-lined below, Greg. From: "Csatari, Gergely (Nokia - HU/Budapest)" > Date: Friday, June 8, 2018 at 3:39 AM To: Greg Waines >, "openstack-dev at lists.openstack.org" >, "edge-computing at lists.openstack.org" > Subject: RE: [Edge-computing] [edge][glance][mixmatch]: Wiki of the possible architectures for image synchronisation Hi, Going inline. From: Waines, Greg [mailto:Greg.Waines at windriver.com] Sent: Thursday, June 7, 2018 2:24 PM I had some additional questions/comments on the Image Synchronization Options ( https://wiki.openstack.org/wiki/Image_handling_in_edge_environment ): One Glance with multiple backends * In this scenario, are all Edge Clouds simply configured with the one central glance for its GLANCE ENDPOINT ? * i.e. GLANCE is a typical shared service in a multi-region environment ? [G0]: In my understanding yes. * If so, how does this OPTION support the requirement for Edge Cloud Operation when disconnected from Central Location ? [G0]: This is an open question for me also. 
Several Glances with an independent synchronization service (PUSH) * I refer to this as the PUSH model * I don’t believe you have to ( or necessarily should) rely on the backend to do the synchronization of the images * i.e. the ‘Synch Service’ could do this strictly through Glance REST APIs (making it independent of the particular Glance backend ... and allowing the Glance Backends at Central and Edge sites to actually be different) [G0]: Okay, I can update the wiki to reflect this. Should we keep the “synchronization by the backend” option as an other alternative? [Greg] Yeah we should keep it as an alternative. * I think the ‘Synch Service’ MUST be able to support ‘selective/multicast’ distribution of Images from Central to Edge for Image Synchronization * i.e. you don’t want Central Site pushing ALL images to ALL Edge Sites ... especially for the small Edge Sites [G0]: Yes, the question is how to define these synchronization policies. [Greg] Agreed ... we’ve had some very high-level discussions with end users, but haven’t put together a proposal yet. * Not sure ... but I didn’t think this was the model being used in mixmatch ... thought mixmatch was more the PULL model (below) [G0]: Yes, this is more or less my understanding. I remove the mixmatch reference from this chapter. One Glance and multiple Glance API Servers (PULL) * I refer to this as the PULL model * This is the current model supported in StarlingX’s Distributed Cloud sub-project * We run glance-api on all Edge Clouds ... that talk to glance-registry on the Central Cloud, and * We have glance-api setup for caching such that only the first access to an particular image incurs the latency of the image transfer from Central to Edge [G0]: Do you do image caching in Glance API or do you rely in the image cache in Nova? In the Forum session there were some discussions about this and I think the conclusion was that using the image cache of Nova is enough. [Greg] We enabled image caching in the Glance API. I believe that Nova Image Caching caches at the compute node ... this would work ok for all-in-one edge clouds or small edge clouds. But glance-api caching caches at the edge cloud level, so works better for large edge clouds with lots of compute nodes. * * this PULL model affectively implements the location aware synchronization you talk about below, (i.e. synchronise images only to those cloud instances where they are needed)? In StarlingX Distributed Cloud, We plan on supporting both the PUSH and PULL model ... suspect there are use cases for both. [G0]: This means that you need an architecture supporting both. Just for my curiosity what is the use case for the pull model once you have the push model in place? [Greg] The PULL model certainly results in the most efficient distribution of images ... basically images are distributed ONLY to edge clouds that explicitly use the image. Also if the use case is NOT concerned about incurring the latency of the image transfer from Central to Edge on the FIRST use of image then the PULL model could be preferred ... TBD. Here is the updated wiki: https://wiki.openstack.org/wiki/Image_handling_in_edge_environment [Greg] Looks good. Greg. 
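For reference, the glance-api image cache Greg describes for the PULL model is normally enabled with a small glance-api.conf change on each edge site. A minimal sketch (option names are the standard Glance caching options; the path and size values are assumptions):

    [paste_deploy]
    flavor = keystone+cachemanagement

    [DEFAULT]
    image_cache_dir = /var/lib/glance/image-cache   # local cache location (assumption)
    image_cache_max_size = 10737418240              # 10 GiB cap enforced by glance-cache-pruner (assumption)
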
Thanks, Gerg0 From: "Csatari, Gergely (Nokia - HU/Budapest)" > Date: Thursday, June 7, 2018 at 6:49 AM To: "openstack-dev at lists.openstack.org" >, "edge-computing at lists.openstack.org" > Subject: Re: [Edge-computing] [edge][glance][mixmatch]: Wiki of the possible architectures for image synchronisation Hi, I did some work ont he figures and realised, that I have some questions related to the alternative options: Multiple backends option: * What is the API between Glance and the Glance backends? * How is it possible to implement location aware synchronisation (synchronise images only to those cloud instances where they are needed)? * Is it possible to have different OpenStack versions in the different cloud instances? * Can a cloud instance use the locally synchronised images in case of a network connection break? * Is it possible to implement this without storing database credentials ont he edge cloud instances? Independent synchronisation service: * If I understood [1] correctly mixmatch can help Nova to attach a remote volume, but it will not help in synchronizing the images. is this true? As I promised in the Edge Compute Group call I plan to organize an IRC review meeting to check the wiki. Please indicate your availability in [2]. [1]: https://mixmatch.readthedocs.io/en/latest/ [2]: https://doodle.com/poll/bddg65vyh4qwxpk5 Br, Gerg0 From: Csatari, Gergely (Nokia - HU/Budapest) Sent: Wednesday, May 23, 2018 8:59 PM To: OpenStack Development Mailing List (not for usage questions) >; edge-computing at lists.openstack.org Subject: [edge][glance]: Wiki of the possible architectures for image synchronisation Hi, Here I send the wiki page [1] where I summarize what I understood from the Forum session about image synchronisation in edge environment [2], [3]. Please check and correct/comment. Thanks, Gerg0 [1]: https://wiki.openstack.org/wiki/Image_handling_in_edge_environment [2]: https://etherpad.openstack.org/p/yvr-edge-cloud-images [3]: https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21768/image-handling-in-an-edge-cloud-infrastructure -------------- next part -------------- An HTML attachment was scrubbed... URL: From alee at redhat.com Mon Jul 2 12:07:22 2018 From: alee at redhat.com (Ade Lee) Date: Mon, 02 Jul 2018 08:07:22 -0400 Subject: [openstack-dev] [barbican] default devstack barbican secret store ? and big picture question ? In-Reply-To: References: Message-ID: <1530533242.7835.27.camel@redhat.com> On Mon, 2018-06-18 at 17:23 +0000, Waines, Greg wrote: > Hey ... a couple of NEWBY question for the Barbican Team. > > I just setup a devstack with Barbican @ stable/queens . > > Ran through the “Verify operation” commands ( > https://docs.openstack.org/barbican/latest/install/verify.html ) ... > Everything worked. 
> stack at barbican:~/devstack$ openstack secret list > > stack at barbican:~/devstack$ openstack secret store --name mysecret -- > payload j4=]d21 > +---------------+-------------------------------------------------- > ------------------------------+ > | Field | Value > | > +---------------+-------------------------------------------------- > ------------------------------+ > | Secret href | http://10.10.10.17/key-manager/v1/secrets/87eb0f18- > e417-45a8-ae49-187f8d8c98d1 | > | Name | mysecret > | > | Created | None > | > | Status | None > | > | Content types | None > | > | Algorithm | aes > | > | Bit length | 256 > | > | Secret type | opaque > | > | Mode | cbc > | > | Expiration | None > | > +---------------+-------------------------------------------------- > ------------------------------+ > stack at barbican:~/devstack$ > stack at barbican:~/devstack$ > stack at barbican:~/devstack$ openstack secret list > +------------------------------------------------------------------ > --------------+----------+---------------------------+--------+---- > -------------------------+-----------+------------+-------------+-- > ----+------------+ > | Secret href > | Name | Created | Status | Content > types | Algorithm | Bit length | Secret type | Mode | > Expiration | > +------------------------------------------------------------------ > --------------+----------+---------------------------+--------+---- > -------------------------+-----------+------------+-------------+-- > ----+------------+ > | http://10.10.10.17/key-manager/v1/secrets/87eb0f18-e417-45a8-ae49- > 187f8d8c98d1 | mysecret | 2018-06-18T14:47:45+00:00 | ACTIVE | > {u'default': u'text/plain'} | aes | 256 | opaque | > cbc | None | > +------------------------------------------------------------------ > --------------+----------+---------------------------+--------+---- > -------------------------+-----------+------------+-------------+-- > ----+------------+ > stack at barbican:~/devstack$ openstack secret get > http://10.10.10.17/key-manager/v1/secrets/87eb0f18-e417-45a8-ae49- > 187f8d8c98d1 > +---------------+-------------------------------------------------- > ------------------------------+ > | Field | Value > | > +---------------+-------------------------------------------------- > ------------------------------+ > | Secret href | http://10.10.10.17/key-manager/v1/secrets/87eb0f18- > e417-45a8-ae49-187f8d8c98d1 | > | Name | mysecret > | > | Created | 2018-06-18T14:47:45+00:00 > | > | Status | ACTIVE > | > | Content types | {u'default': u'text/plain'} > | > | Algorithm | aes > | > | Bit length | 256 > | > | Secret type | opaque > | > | Mode | cbc > | > | Expiration | None > | > +---------------+-------------------------------------------------- > ------------------------------+ > stack at barbican:~/devstack$ openstack secret get > http://10.10.10.17/key-manager/v1/secrets/87eb0f18-e417-45a8-ae49- > 187f8d8c98d1 --payload > +---------+---------+ > | Field | Value | > +---------+---------+ > | Payload | j4=]d21 | > +---------+---------+ > stack at barbican:~/devstack$ > > > QUESTIONS: > · In this basic devstack setup, what is being used as the > secret store ? In the basic devstack setup, we use the default secret store plugin which is the SimpleCrypto plugin. This encrypts the secrets using a symmetric key, and stores the results in the barbican sql database. The default encryption key can be seen in https://github.com/openstack/ barbican/blob/master/barbican/plugin/crypto/simple_crypto.py#L37 > o E.g. 
/etc/barbican/barbican.conf for devstack is simply > stack at barbican:~/devstack$ more /etc/barbican/barbican.conf > > [DEFAULT] > transport_url = rabbit://stackrabbit:admin at 10.10.10.17:5672 > db_auto_create = False > sql_connection = > mysql+pymysql://root:admin at 127.0.0.1/barbican?charset=utf8 > logging_exception_prefix = %(color)s%(asctime)s.%(msecs)03d TRACE > %(name)s %(instance)s > logging_debug_format_suffix = from (pid=%(process)d) %(funcName)s > %(pathname)s:%(lineno)d > logging_default_format_string = %(asctime)s.%(msecs)03d > %(color)s%(levelname)s %(name)s [-%(color)s] > %(instance)s%(color)s%(message)s > logging_context_format_string = %(asctime)s.%(msecs)03d > %(color)s%(levelname)s %(name)s [%(request_id)s %(project_name)s > %(user_name)s%(color)s] %(instance)s%(color)s%(message)s > use_stderr = True > log_file = /opt/stack/logs/barbican.log > host_href = http://10.10.10.17/key-manager > debug = True > > [keystone_authtoken] > memcached_servers = localhost:11211 > signing_dir = /var/cache/barbican > cafile = /opt/stack/data/ca-bundle.pem > project_domain_name = Default > project_name = service > user_domain_name = Default > password = admin > username = barbican > auth_url = http://10.10.10.17/identity > auth_type = password > > [keystone_notifications] > enable = True > stack at barbican:~/devstack$ > > > What is the basic strategy here wrt Barbican providing secure secret > storage ? > e.g. > Secrets are stored encrypted in some secret store ? > Again, for default devstack, what is that secret store ? (assuming > it is NOT the DB being used for general openstack services’ tables) > i.e. assuming it is separate DB or file or directory of files See response above. In the basic devstack case, the secrets are encrypted by the encryption key (kek) and stored in the barbican sql database. Barbican has a number of gates where we configure different secret stores (including KMIP, Dogtag and Vault). Depending on the secret store, the KEK and secret may be stored in different places. > What key is used for encryption ? ... > > The UUID of the Barbican ‘secret’ object in the Barbican openstack DB > Table is the ‘external reference’ for the secret ? > ? and this ‘secret’ object has the internal reference for the secret > in the secret store ? > > Each secret stored in barbican has an entry in the barbican DB secrets table. This is the UUID in the "external reference". For the SimpleCryptoPlugin, the secret payload is also stored encrypted in the DB (in a separate table). For different secret store plugins esp. the KMIP, Dogtag or Vault plugins, where the secret payload in stored in a separate system, the secret store entry will store the 'internal' secret reference to allow Barbican to retrieve the secret from Dogtag/Vault/ KMIP device. > ADMIN privileges are required to access the Barbican ‘secret’ objects > ? > In the basic devstack case using SimpleCrypto, the secrets are stored encrypted in the DB. The DB is supposed to be accessed only through the Barbican API, which enforces oslo.policy according to policy.json file. Typically, that means being able to access a secret if you are a user within the same project. > For dev > > > Soooo ... the secrets are stored in encrypted format and can only be > referenced / retrieved in plain text with ADMIN privileges > Is this the basis of the strategy ? > No, secrets are stored encrypted ans can be obtained unencrypted through the Barbican REST API with the right keystone permissions. > > Thanks in advance, > Greg. 
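For completeness, the default SimpleCrypto secret store Ade describes above maps to a barbican.conf fragment roughly like this (option names as in the Barbican sample config; the kek value is a placeholder, the shipped default is in the simple_crypto.py link above and must be overridden for production):

    [secretstore]
    enabled_secretstore_plugins = store_crypto

    [crypto]
    enabled_crypto_plugins = simple_crypto

    [simple_crypto_plugin]
    # base64-encoded 32-byte key used to encrypt secrets before they are stored in the Barbican DB
    kek = '<base64-encoded-32-byte-key>'
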
> > > > > > > > _____________________________________________________________________ > _____ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubs > cribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From alee at redhat.com Mon Jul 2 12:39:07 2018 From: alee at redhat.com (Ade Lee) Date: Mon, 02 Jul 2018 08:39:07 -0400 Subject: [openstack-dev] [barbican][heat] Identifying secrets in Barbican In-Reply-To: References: <78c1cd708b9a9992b96dc56033dc8c5ed74fc658.camel@redhat.com> Message-ID: <1530535147.7835.30.camel@redhat.com> On Thu, 2018-06-28 at 17:32 -0400, Zane Bitter wrote: > On 28/06/18 15:00, Douglas Mendizabal wrote: > > Replying inline. > > [snip] > > IIRC, using URIs instead of UUIDs was a federation pre-optimization > > done many years ago when Barbican was brand new and we knew we > > wanted > > federation but had no idea how it would work. The rationale was > > that > > the URI would contain both the ID of the secret as well as the > > location > > of where it was stored. > > > > In retrospect, that was a terrible idea, and using UUIDs for > > consistency with the rest of OpenStack would have been a better > > choice. > > I've added a story to the python-barbicanclient storyboard to > > enable > > usage of UUIDs instead of URLs: > > > > https://storyboard.openstack.org/#!/story/2002754 > > Cool, thanks for clearing that up. If UUID is going to become the/a > standard way to reference stuff in the future then we'll just use > the > UUID for the property value. > > > I'm sure you've noticed, but the URI that identifies the secret > > includes the UUID that Barbican uses to identify the secret > > internally: > > > > http://{barbican-host}:9311/v1/secrets/{UUID} > > > > So you don't actually need to store the URI, since it can be > > reconstructed by just saving the UUID and then using whatever URL > > Barbican has in the service catalog. > > > > > > > > In a tangentially related question, since secrets are immutable > > > once > > > they've been uploaded, what's the best way to handle a case where > > > you > > > need to rotate a secret without causing a temporary condition > > > where > > > there is no version of the secret available? (The fact that > > > there's > > > no > > > way to do this for Nova keypairs is a perpetual problem for > > > people, > > > and > > > I'd anticipate similar use cases for Barbican.) I'm going to > > > guess > > > it's: > > > > > > * Create a new secret with the same name > > > * GET /v1/secrets/?name=&sort=created:desc&limit=1 to find > > > out > > > the > > > URL for the newest secret with that name > > > * Use that URL when accessing the secret > > > * Once the new secret is created, delete the old one > > > > > > Should this, or whatever the actual recommended way of doing it > > > is, > > > be > > > baked in to the client somehow so that not every user needs to > > > reimplement it? > > > > > > > When you store a secret (e.g. using POST /v1/secrets), the response > > includes the URI both in the JSON body and in the Location: header. > > > > There is no need for you to mess around with searching by name, > > since > > Barbican does not use the name to identify a secret. You should > > just > > save the URI (or UUID) from the response, and then update the > > resource > > using the old secret to point to the new secret instead. 
> > Sometimes user will want to be able to rotate secrets without > updating > all of the places that they're referenced from though. > The way you've described seems like the easiest way to do this, and I agree that this seems like a reasonable and common use case for the client. I've added https://storyboard.openstack.org/#!/story/2002786 . > cheers, > Zane. > > _____________________________________________________________________ > _____ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubs > cribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From cdent+os at anticdent.org Mon Jul 2 13:22:12 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Mon, 2 Jul 2018 14:22:12 +0100 (BST) Subject: [openstack-dev] [tc] [all] TC Report 18-26 In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C141B52@EX10MBOX03.pnnl.gov> References: <70afeb87-37c9-1595-ffa4-aadbd1a90228@gmail.com> , <1A3C52DFCD06494D8528644858247BF01C141B52@EX10MBOX03.pnnl.gov> Message-ID: On Thu, 28 Jun 2018, Fox, Kevin M wrote: > I think if OpenStack wants to gain back some of the steam it had before, it needs to adjust to the new world it is living in. This means: > * Consider abolishing the project walls. They are driving bad architecture (not intentionally but as a side affect of structure) > * focus on the commons first. > * simplify the architecture for ops: > * make as much as possible stateless and centralize remaining state. > * stop moving config options around with every release. Make it promote automatically and persist it somewhere. > * improve serial performance before sharding. k8s can do 5000 nodes on one control plane. No reason to do nova cells and make ops deal with it except for the most huge of clouds > * consider a reference product (think Linux vanilla kernel. distro's can provide their own variants. thats ok) > * come up with an architecture team for the whole, not the subsystem. The whole thing needs to work well. > * encourage current OpenStack devs to test/deploy Kubernetes. It has some very good ideas that OpenStack could benefit from. If you don't know what they are, you can't adopt them. These are ideas worth thinking about. We may not be able to do them (unclear) but they are stimulating and interesting and we need to keep the converstaion going. Thank you. I referenced this thread from a blog post I just made https://anticdent.org/some-opinions-on-openstack.html which is just a bunch of random ideas on tweaking OpenStack in the face of growth and change. It's quite likely it's junk, but there may be something useful to extract as we try to achieve some focus. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From mriedemos at gmail.com Mon Jul 2 14:28:48 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 2 Jul 2018 09:28:48 -0500 Subject: [openstack-dev] [nova] Continuously growing request_specs table In-Reply-To: References: Message-ID: <986ac982-0638-721f-922c-d0843ba78ce1@gmail.com> On 7/2/2018 2:47 AM, Zhenyu Zheng wrote: > It seems that the current request_specs record did not got removed even > when the related instance is gone, which lead to a continuously growing > request_specs table. How is that so? > > Is it because the delete process could be error and we have to recover > the request_spec if we deleted it? 
> > How about adding a nova-manage CLI command for operators to clean up > out-dated request specs records from the table by comparing the request > specs and existence of related instance? Already fixed in Rocky: https://review.openstack.org/#/c/515034/ -- Thanks, Matt From mriedemos at gmail.com Mon Jul 2 14:36:35 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 2 Jul 2018 09:36:35 -0500 Subject: [openstack-dev] [nova] about filter the flavor In-Reply-To: References: <20180702072003.GA3755@redhat> Message-ID: On 7/2/2018 2:43 AM, 李杰 wrote: > Oh,sorry,not this means,in my opinion,we could filter the flavor in > flavor list.such as the cli:openstack flavor list --property key:value. There is no support for natively filtering flavors by extra specs in the compute REST API so that would have to be added with a microversion (if we wanted to add that support). So it would require a nova spec, which would be reviewed for consideration at the earliest in the Stein release. OSC could do client-side filtering if it wanted. -- Thanks, Matt From balazs.gibizer at ericsson.com Mon Jul 2 15:22:39 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Mon, 02 Jul 2018 17:22:39 +0200 Subject: [openstack-dev] [nova]Notification update week 27 Message-ID: <1530544959.27152.1@smtp.office365.com> Hi, Here is the latest notification subteam update. Bugs ---- [Medium] Server operations fail to complete with versioned notifications if payload contains unset non-nullable fields https://bugs.launchpad.net/nova/+bug/1739325 This bug is still open and reportedly visible in multiple independent environment but I failed to find the root cause. So I'm wondering if we can implement a nova-manage heal-instance-flavor command for these environments. [Medium] Missing versioned notification examples in Python 3 https://bugs.launchpad.net/nova/+bug/1779606 Fix proposed and merged https://review.openstack.org/#/c/579436/ Features -------- Add the user id and project id of the user initiated the instance action to the notification -------------------------------------------------------------------------------------------- https://blueprints.launchpad.net/nova/+spec/add-action-initiator-to-instance-action-notifications I'm +2 on the implementation in https://review.openstack.org/#/c/536243 Weekly meeting -------------- The next meeting is planned to be held on 3rd of June on #openstack-meeting-4 https://www.timeanddate.com/worldclock/fixedtime.html?iso=20180703T170000 Cheers, gibi -------------- next part -------------- An HTML attachment was scrubbed... URL: From tenobreg at redhat.com Mon Jul 2 16:50:26 2018 From: tenobreg at redhat.com (Telles Nobrega) Date: Mon, 2 Jul 2018 13:50:26 -0300 Subject: [openstack-dev] [sahara][ptg] Sahara schedule Message-ID: Hi Saharans, as previously discussed, we are scheduled for Monday and Tuesday at the PTG in Denver. I would like to hear from folks who are planning to be there which days works best for you. Options are, Monday and Tuesday or Tuesday and Wednesday. Keep in mind that I can't guarantee a switch, I can only propose to the organizers and see what we can do. Thanks all, -- TELLES NOBREGA SOFTWARE ENGINEER Red Hat Brasil Av. Brg. Faria Lima, 3900 - 8º andar - Itaim Bibi, São Paulo tenobreg at redhat.com TRIED. TESTED. TRUSTED. Red Hat é reconhecida entre as melhores empresas para trabalhar no Brasil pelo Great Place to Work. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jeremyfreudberg at gmail.com Mon Jul 2 17:48:39 2018 From: jeremyfreudberg at gmail.com (Jeremy Freudberg) Date: Mon, 2 Jul 2018 13:48:39 -0400 Subject: [openstack-dev] [sahara][ptg] Sahara schedule In-Reply-To: References: Message-ID: Tuesday+Wednesday positive: gives time on Monday for the API SIG (I personally would like to be there) and the Ask-me-anything/goal help room Tuesday+Wednesday negative: less time for Luigi (if he is at PTG) to do QA things (but QA will also be there on Thursday) Tuesday+Wednesday negative: the further we go into the week, the more there is a risk that I am needed back at school (although from what I can see now this will not be a problem) Basically you can pick whatever days, and then I will make it work. I don't want to be accountable for such a decision. On Mon, Jul 2, 2018 at 12:50 PM, Telles Nobrega wrote: > Hi Saharans, > > as previously discussed, we are scheduled for Monday and Tuesday at the PTG > in Denver. I would like to hear from folks who are planning to be there > which days works best for you. Options are, Monday and Tuesday or Tuesday > and Wednesday. > > Keep in mind that I can't guarantee a switch, I can only propose to the > organizers and see what we can do. > > Thanks all, > -- > > TELLES NOBREGA > > SOFTWARE ENGINEER > > Red Hat Brasil > > Av. Brg. Faria Lima, 3900 - 8º andar - Itaim Bibi, São Paulo > > tenobreg at redhat.com > > TRIED. TESTED. TRUSTED. > Red Hat é reconhecida entre as melhores empresas para trabalhar no Brasil > pelo Great Place to Work. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From lbragstad at gmail.com Mon Jul 2 18:41:42 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Mon, 2 Jul 2018 13:41:42 -0500 Subject: [openstack-dev] [tc] [all] TC Report 18-26 In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C141B52@EX10MBOX03.pnnl.gov> References: <70afeb87-37c9-1595-ffa4-aadbd1a90228@gmail.com> <1A3C52DFCD06494D8528644858247BF01C141B52@EX10MBOX03.pnnl.gov> Message-ID: <47f04147-d83d-4766-15e5-4e8e6e7c3a82@gmail.com> On 06/28/2018 02:09 PM, Fox, Kevin M wrote: > I'll weigh in a bit with my operator hat on as recent experience it pertains to the current conversation.... > > Kubernetes has largely succeeded in common distribution tools where OpenStack has not been able to. > kubeadm was created as a way to centralize deployment best practices, config, and upgrade stuff into a common code based that other deployment tools can build on. > > I think this has been successful for a few reasons: > * kubernetes followed a philosophy of using k8s to deploy/enhance k8s. (Eating its own dogfood) > * was willing to make their api robust enough to handle that self enhancement. (secrets are a thing, orchestration is not optional, etc) > * they decided to produce a reference product (very important to adoption IMO. You don't have to "build from source" to kick the tires.) > * made the barrier to testing/development as low as 'curl http://......minikube; minikube start' (this spurs adoption and contribution) > * not having large silo's in deployment projects allowed better communication on common tooling. > * Operator focused architecture, not project based architecture. This simplifies the deployment situation greatly. 
> * try whenever possible to focus on just the commons and push vendor specific needs to plugins so vendors can deal with vendor issues directly and not corrupt the core. > > I've upgraded many OpenStacks since Essex and usually it is multiple weeks of prep, and a 1-2 day outage to perform the deed. about 50% of the upgrades, something breaks only on the production system and needs hot patching on the spot. About 10% of the time, I've had to write the patch personally. > > I had to upgrade a k8s cluster yesterday from 1.9.6 to 1.10.5. For comparison, what did I have to do? A couple hours of looking at release notes and trying to dig up examples of where things broke for others. Nothing popped up. Then: > > on the controller, I ran: > yum install -y kubeadm #get the newest kubeadm > kubeadm upgrade plan #check things out > > It told me I had 2 choices. I could: > * kubeadm upgrade v1.9.8 > * kubeadm upgrade v1.10.5 > > I ran: > kubeadm upgrade v1.10.5 > > The control plane was down for under 60 seconds and then the cluster was upgraded. The rest of the services did a rolling upgrade live and took a few more minutes. > > I can take my time to upgrade kubelets as mixed kubelet versions works well. > > Upgrading kubelet is about as easy. > > Done. > > There's a lot of things to learn from the governance / architecture of Kubernetes.. > > Fundamentally, there isn't huge differences in what Kubernetes and OpenStack tries to provide users. Scheduling a VM or a Container via an api with some kind of networking and storage is the same kind of thing in either case. > > The how to get the software (openstack or k8s) running is about as polar opposite you can get though. > > I think if OpenStack wants to gain back some of the steam it had before, it needs to adjust to the new world it is living in. This means: > * Consider abolishing the project walls. They are driving bad architecture (not intentionally but as a side affect of structure) > * focus on the commons first. Nearly all the work we're been doing from an identity perspective over the last 18 months has enabled or directly improved the commons (or what I would consider the commons). I agree that it's important, but we're already focusing on it to the point where we're out of bandwidth. Is the problem that it doesn't appear that way? Do we have different ideas of what the "commons" are? > * simplify the architecture for ops: > * make as much as possible stateless and centralize remaining state. > * stop moving config options around with every release. Make it promote automatically and persist it somewhere. > * improve serial performance before sharding. k8s can do 5000 nodes on one control plane. No reason to do nova cells and make ops deal with it except for the most huge of clouds > * consider a reference product (think Linux vanilla kernel. distro's can provide their own variants. thats ok) > * come up with an architecture team for the whole, not the subsystem. The whole thing needs to work well. > * encourage current OpenStack devs to test/deploy Kubernetes. It has some very good ideas that OpenStack could benefit from. If you don't know what they are, you can't adopt them. > > And I know its hard to talk about, but consider just adopting k8s as the commons and build on top of it. OpenStack's api's are good. The implementations right now are very very heavy for ops. You could tie in K8s's pod scheduler with vm stuff running in containers and get a vastly simpler architecture for operators to deal with. 
Yes, this would be a major disruptive change to OpenStack. But long term, I think it would make for a much healthier OpenStack. > > Thanks, > Kevin > ________________________________________ > From: Zane Bitter [zbitter at redhat.com] > Sent: Wednesday, June 27, 2018 4:23 PM > To: openstack-dev at lists.openstack.org > Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26 > > On 27/06/18 07:55, Jay Pipes wrote: >> WARNING: >> >> Danger, Will Robinson! Strong opinions ahead! > I'd have been disappointed with anything less :) > >> On 06/26/2018 10:00 PM, Zane Bitter wrote: >>> On 26/06/18 09:12, Jay Pipes wrote: >>>> Is (one of) the problem(s) with our community that we have too small >>>> of a scope/footprint? No. Not in the slightest. >>> Incidentally, this is an interesting/amusing example of what we talked >>> about this morning on IRC[1]: you say your concern is that the scope >>> of *Nova* is too big and that you'd be happy to have *more* services >>> in OpenStack if they took the orchestration load off Nova and left it >>> just to handle the 'plumbing' part (which I agree with, while noting >>> that nobody knows how to get there from here); but here you're >>> implying that Kata Containers (something that will clearly have no >>> effect either way on the simplicity or otherwise of Nova) shouldn't be >>> part of the Foundation because it will take focus away from >>> Nova/OpenStack. >> Above, I was saying that the scope of the *OpenStack* community is >> already too broad (IMHO). An example of projects that have made the >> *OpenStack* community too broad are purpose-built telco applications >> like Tacker [1] and Service Function Chaining. [2] >> >> I've also argued in the past that all distro- or vendor-specific >> deployment tools (Fuel, Triple-O, etc [3]) should live outside of >> OpenStack because these projects are more products and the relentless >> drive of vendor product management (rightfully) pushes the scope of >> these applications to gobble up more and more feature space that may or >> may not have anything to do with the core OpenStack mission (and have >> more to do with those companies' product roadmap). > I'm still sad that we've never managed to come up with a single way to > install OpenStack. The amount of duplicated effort expended on that > problem is mind-boggling. At least we tried though. Excluding those > projects from the community would have just meant giving up from the > beginning. > > I think Thierry's new map, that collects installer services in a > separate bucket (that may eventually come with a separate git namespace) > is a helpful way of communicating to users what's happening without > forcing those projects outside of the community. > >> On the other hand, my statement that the OpenStack Foundation having 4 >> different focus areas leads to a lack of, well, focus, is a general >> statement on the OpenStack *Foundation* simultaneously expanding its >> sphere of influence while at the same time losing sight of OpenStack >> itself -- and thus the push to create an Open Infrastructure Foundation >> that would be able to compete with the larger mission of the Linux >> Foundation. >> >> [1] This is nothing against Tacker itself. I just don't believe that >> *applications* that are specially built for one particular industry >> belong in the OpenStack set of projects. 
I had repeatedly stated this on >> Tacker's application to become an OpenStack project, FWIW: >> >> https://review.openstack.org/#/c/276417/ >> >> [2] There is also nothing wrong with service function chains. I just >> don't believe they belong in *OpenStack*. They more appropriately belong >> in the (Open)NFV community because they just are not applicable outside >> of that community's scope and mission. >> >> [3] It's interesting to note that Airship was put into its own >> playground outside the bounds of the OpenStack community (but inside the >> bounds of the OpenStack Foundation). > I wouldn't say it's inside the bounds of the Foundation, and in fact > confusion about that is a large part of why I wrote the blog post. It is > a 100% unofficial project that just happens to be hosted on our infra. > Saying it's inside the bounds of the Foundation is like saying > Kubernetes is inside the bounds of GitHub. > >> Airship is AT&T's specific >> deployment tooling for "the edge!". I actually think this was the >> correct move for this vendor-opinionated deployment tool. >> >>> So to answer your question: >>> >>> zaneb: yeah... nobody I know who argues for a small stable >>> core (in Nova) has ever said there should be fewer higher layer services. >>> zaneb: I'm not entirely sure where you got that idea from. >> Note the emphasis on *Nova* above? >> >> Also note that when I've said that *OpenStack* should have a smaller >> mission and scope, that doesn't mean that higher-level services aren't >> necessary or wanted. > Thank you for saying this, and could I please ask you to repeat this > disclaimer whenever you talk about a smaller scope for OpenStack. > Because for those of us working on higher-level services it feels like > there has been a non-stop chorus (both inside and outside the project) > of people wanting to redefine OpenStack as something that doesn't > include us. > > The reason I haven't dropped this discussion is because I really want to > know if _all_ of those people were actually talking about something else > (e.g. a smaller scope for Nova), or if it's just you. Because you and I > are in complete agreement that Nova has grown a lot of obscure > capabilities that make it fiendishly difficult to maintain, and that in > many cases might never have been requested if we'd had higher-level > tools that could meet the same use cases by composing simpler operations. > > IMHO some of the contributing factors to that were: > > * The aforementioned hostility from some quarters to the existence of > higher-level projects in OpenStack. > * The ongoing hostility of operators to deploying any projects outside > of Keystone/Nova/Glance/Neutron/Cinder (*still* seen playing out in the > Barbican vs. Castellan debate, where we can't even correct one of > OpenStack's original sins and bake in a secret store - something k8s > managed from day one - because people don't want to install another ReST > API even over a backend that they'll already have to install anyway). > * The illegibility of public Nova interfaces to potential higher-level > tools. > >> It's just that Nova has been a dumping ground over the past 7+ years for >> features that, looking back, should never have been added to Nova (or at >> least, never added to the Compute API) [4]. >> >> What we were discussing yesterday on IRC was this: >> >> "Which parts of the Compute API should have been implemented in other >> services?" 
>> >> What we are discussing here is this: >> >> "Which projects in the OpenStack community expanded the scope of the >> OpenStack mission beyond infrastructure-as-a-service?" >> >> and, following that: >> >> "What should we do about projects that expanded the scope of the >> OpenStack mission beyond infrastructure-as-a-service?" >> >> Note that, clearly, my opinion is that OpenStack's mission should be to >> provide infrastructure as a service projects (both plumbing and porcelain). >> >> This is MHO only. The actual OpenStack mission statement [5] is >> sufficiently vague as to provide no meaningful filtering value for >> determining new entrants to the project ecosystem. > I think this is inevitable, in that if you want to define cloud > computing in a single sentence it will necessarily be very vague. > > That's the reason for pursuing a technical vision statement > (brainstorming for which is how this discussion started), so we can > spell it out in a longer form. > > cheers, > Zane. > >> I *personally* believe that should change in order for the *OpenStack* >> community to have some meaningful definition and differentiation from >> the broader cloud computing, application development, and network >> orchestration ecosystems. >> >> All the best, >> -jay >> >> [4] ... or never brought into the Compute API to begin with. You know, >> vestigial tail and all that. >> >> [5] for reference: "The OpenStack Mission is to produce a ubiquitous >> Open Source Cloud Computing platform that is easy to use, simple to >> implement, interoperable between deployments, works well at all scales, >> and meets the needs of users and operators of both public and private >> clouds." >> >>> I guess from all the people who keep saying it ;) >>> >>> Apparently somebody was saying it a year ago too :D >>> https://twitter.com/zerobanana/status/883052105791156225 >>> >>> cheers, >>> Zane. >>> >>> [1] >>> http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-06-26.log.html#t2018-06-26T15:30:33 >>> >>> >>> __________________________________________________________________________ >>> >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From lars at redhat.com Mon Jul 2 18:57:49 2018 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Mon, 2 Jul 2018 14:57:49 -0400 Subject: [openstack-dev] [Puppet] Requirements for running puppet unit tests? In-Reply-To: <20180629000402.cuf2tpdc4fsagnkk@redhat.com> References: <20180629000402.cuf2tpdc4fsagnkk@redhat.com> Message-ID: On Thu, Jun 28, 2018 at 8:04 PM, Lars Kellogg-Stedman wrote: > What is required to successfully run the rspec tests? On the odd chance that it might be useful to someone else, here's the Docker image I'm using to successfully run the rspec tests for puppet-keystone: https://github.com/larsks/docker-image-rspec Available on docker hub as larsks/rspec. Cheers, -- Lars Kellogg-Stedman -------------- next part -------------- An HTML attachment was scrubbed... URL: From Kevin.Fox at pnnl.gov Mon Jul 2 19:12:21 2018 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Mon, 2 Jul 2018 19:12:21 +0000 Subject: [openstack-dev] [tc] [all] TC Report 18-26 In-Reply-To: <47f04147-d83d-4766-15e5-4e8e6e7c3a82@gmail.com> References: <70afeb87-37c9-1595-ffa4-aadbd1a90228@gmail.com> <1A3C52DFCD06494D8528644858247BF01C141B52@EX10MBOX03.pnnl.gov>, <47f04147-d83d-4766-15e5-4e8e6e7c3a82@gmail.com> Message-ID: <1A3C52DFCD06494D8528644858247BF01C142AA9@EX10MBOX03.pnnl.gov> I think Keystone is one of the exceptions currently, as it is the quintessential common service in all of OpenStack since the rule was made, all things auth belong to Keystone and the other projects don't waver from it. The same can not be said of, say, Barbican. Steps have been made recently to get farther down that path, but still is not there yet. Until it is blessed as a common, required component, other silo's are still disincentivized to depend on it. I think a lot of the pushback around not adding more common/required services is the extra load it puts on ops though. hence these: > * Consider abolishing the project walls. > * simplify the architecture for ops IMO, those need to change to break free from the pushback and make progress on the commons again. Thanks, Kevin ________________________________________ From: Lance Bragstad [lbragstad at gmail.com] Sent: Monday, July 02, 2018 11:41 AM To: openstack-dev at lists.openstack.org Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26 On 06/28/2018 02:09 PM, Fox, Kevin M wrote: > I'll weigh in a bit with my operator hat on as recent experience it pertains to the current conversation.... > > Kubernetes has largely succeeded in common distribution tools where OpenStack has not been able to. > kubeadm was created as a way to centralize deployment best practices, config, and upgrade stuff into a common code based that other deployment tools can build on. > > I think this has been successful for a few reasons: > * kubernetes followed a philosophy of using k8s to deploy/enhance k8s. (Eating its own dogfood) > * was willing to make their api robust enough to handle that self enhancement. (secrets are a thing, orchestration is not optional, etc) > * they decided to produce a reference product (very important to adoption IMO. You don't have to "build from source" to kick the tires.) > * made the barrier to testing/development as low as 'curl http://......minikube; minikube start' (this spurs adoption and contribution) > * not having large silo's in deployment projects allowed better communication on common tooling. 
> * Operator focused architecture, not project based architecture. This simplifies the deployment situation greatly. > * try whenever possible to focus on just the commons and push vendor specific needs to plugins so vendors can deal with vendor issues directly and not corrupt the core. > > I've upgraded many OpenStacks since Essex and usually it is multiple weeks of prep, and a 1-2 day outage to perform the deed. about 50% of the upgrades, something breaks only on the production system and needs hot patching on the spot. About 10% of the time, I've had to write the patch personally. > > I had to upgrade a k8s cluster yesterday from 1.9.6 to 1.10.5. For comparison, what did I have to do? A couple hours of looking at release notes and trying to dig up examples of where things broke for others. Nothing popped up. Then: > > on the controller, I ran: > yum install -y kubeadm #get the newest kubeadm > kubeadm upgrade plan #check things out > > It told me I had 2 choices. I could: > * kubeadm upgrade v1.9.8 > * kubeadm upgrade v1.10.5 > > I ran: > kubeadm upgrade v1.10.5 > > The control plane was down for under 60 seconds and then the cluster was upgraded. The rest of the services did a rolling upgrade live and took a few more minutes. > > I can take my time to upgrade kubelets as mixed kubelet versions works well. > > Upgrading kubelet is about as easy. > > Done. > > There's a lot of things to learn from the governance / architecture of Kubernetes.. > > Fundamentally, there isn't huge differences in what Kubernetes and OpenStack tries to provide users. Scheduling a VM or a Container via an api with some kind of networking and storage is the same kind of thing in either case. > > The how to get the software (openstack or k8s) running is about as polar opposite you can get though. > > I think if OpenStack wants to gain back some of the steam it had before, it needs to adjust to the new world it is living in. This means: > * Consider abolishing the project walls. They are driving bad architecture (not intentionally but as a side affect of structure) > * focus on the commons first. Nearly all the work we're been doing from an identity perspective over the last 18 months has enabled or directly improved the commons (or what I would consider the commons). I agree that it's important, but we're already focusing on it to the point where we're out of bandwidth. Is the problem that it doesn't appear that way? Do we have different ideas of what the "commons" are? > * simplify the architecture for ops: > * make as much as possible stateless and centralize remaining state. > * stop moving config options around with every release. Make it promote automatically and persist it somewhere. > * improve serial performance before sharding. k8s can do 5000 nodes on one control plane. No reason to do nova cells and make ops deal with it except for the most huge of clouds > * consider a reference product (think Linux vanilla kernel. distro's can provide their own variants. thats ok) > * come up with an architecture team for the whole, not the subsystem. The whole thing needs to work well. > * encourage current OpenStack devs to test/deploy Kubernetes. It has some very good ideas that OpenStack could benefit from. If you don't know what they are, you can't adopt them. > > And I know its hard to talk about, but consider just adopting k8s as the commons and build on top of it. OpenStack's api's are good. The implementations right now are very very heavy for ops. 
You could tie in K8s's pod scheduler with vm stuff running in containers and get a vastly simpler architecture for operators to deal with. Yes, this would be a major disruptive change to OpenStack. But long term, I think it would make for a much healthier OpenStack. > > Thanks, > Kevin > ________________________________________ > From: Zane Bitter [zbitter at redhat.com] > Sent: Wednesday, June 27, 2018 4:23 PM > To: openstack-dev at lists.openstack.org > Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26 > > On 27/06/18 07:55, Jay Pipes wrote: >> WARNING: >> >> Danger, Will Robinson! Strong opinions ahead! > I'd have been disappointed with anything less :) > >> On 06/26/2018 10:00 PM, Zane Bitter wrote: >>> On 26/06/18 09:12, Jay Pipes wrote: >>>> Is (one of) the problem(s) with our community that we have too small >>>> of a scope/footprint? No. Not in the slightest. >>> Incidentally, this is an interesting/amusing example of what we talked >>> about this morning on IRC[1]: you say your concern is that the scope >>> of *Nova* is too big and that you'd be happy to have *more* services >>> in OpenStack if they took the orchestration load off Nova and left it >>> just to handle the 'plumbing' part (which I agree with, while noting >>> that nobody knows how to get there from here); but here you're >>> implying that Kata Containers (something that will clearly have no >>> effect either way on the simplicity or otherwise of Nova) shouldn't be >>> part of the Foundation because it will take focus away from >>> Nova/OpenStack. >> Above, I was saying that the scope of the *OpenStack* community is >> already too broad (IMHO). An example of projects that have made the >> *OpenStack* community too broad are purpose-built telco applications >> like Tacker [1] and Service Function Chaining. [2] >> >> I've also argued in the past that all distro- or vendor-specific >> deployment tools (Fuel, Triple-O, etc [3]) should live outside of >> OpenStack because these projects are more products and the relentless >> drive of vendor product management (rightfully) pushes the scope of >> these applications to gobble up more and more feature space that may or >> may not have anything to do with the core OpenStack mission (and have >> more to do with those companies' product roadmap). > I'm still sad that we've never managed to come up with a single way to > install OpenStack. The amount of duplicated effort expended on that > problem is mind-boggling. At least we tried though. Excluding those > projects from the community would have just meant giving up from the > beginning. > > I think Thierry's new map, that collects installer services in a > separate bucket (that may eventually come with a separate git namespace) > is a helpful way of communicating to users what's happening without > forcing those projects outside of the community. > >> On the other hand, my statement that the OpenStack Foundation having 4 >> different focus areas leads to a lack of, well, focus, is a general >> statement on the OpenStack *Foundation* simultaneously expanding its >> sphere of influence while at the same time losing sight of OpenStack >> itself -- and thus the push to create an Open Infrastructure Foundation >> that would be able to compete with the larger mission of the Linux >> Foundation. >> >> [1] This is nothing against Tacker itself. I just don't believe that >> *applications* that are specially built for one particular industry >> belong in the OpenStack set of projects. 
I had repeatedly stated this on >> Tacker's application to become an OpenStack project, FWIW: >> >> https://review.openstack.org/#/c/276417/ >> >> [2] There is also nothing wrong with service function chains. I just >> don't believe they belong in *OpenStack*. They more appropriately belong >> in the (Open)NFV community because they just are not applicable outside >> of that community's scope and mission. >> >> [3] It's interesting to note that Airship was put into its own >> playground outside the bounds of the OpenStack community (but inside the >> bounds of the OpenStack Foundation). > I wouldn't say it's inside the bounds of the Foundation, and in fact > confusion about that is a large part of why I wrote the blog post. It is > a 100% unofficial project that just happens to be hosted on our infra. > Saying it's inside the bounds of the Foundation is like saying > Kubernetes is inside the bounds of GitHub. > >> Airship is AT&T's specific >> deployment tooling for "the edge!". I actually think this was the >> correct move for this vendor-opinionated deployment tool. >> >>> So to answer your question: >>> >>> zaneb: yeah... nobody I know who argues for a small stable >>> core (in Nova) has ever said there should be fewer higher layer services. >>> zaneb: I'm not entirely sure where you got that idea from. >> Note the emphasis on *Nova* above? >> >> Also note that when I've said that *OpenStack* should have a smaller >> mission and scope, that doesn't mean that higher-level services aren't >> necessary or wanted. > Thank you for saying this, and could I please ask you to repeat this > disclaimer whenever you talk about a smaller scope for OpenStack. > Because for those of us working on higher-level services it feels like > there has been a non-stop chorus (both inside and outside the project) > of people wanting to redefine OpenStack as something that doesn't > include us. > > The reason I haven't dropped this discussion is because I really want to > know if _all_ of those people were actually talking about something else > (e.g. a smaller scope for Nova), or if it's just you. Because you and I > are in complete agreement that Nova has grown a lot of obscure > capabilities that make it fiendishly difficult to maintain, and that in > many cases might never have been requested if we'd had higher-level > tools that could meet the same use cases by composing simpler operations. > > IMHO some of the contributing factors to that were: > > * The aforementioned hostility from some quarters to the existence of > higher-level projects in OpenStack. > * The ongoing hostility of operators to deploying any projects outside > of Keystone/Nova/Glance/Neutron/Cinder (*still* seen playing out in the > Barbican vs. Castellan debate, where we can't even correct one of > OpenStack's original sins and bake in a secret store - something k8s > managed from day one - because people don't want to install another ReST > API even over a backend that they'll already have to install anyway). > * The illegibility of public Nova interfaces to potential higher-level > tools. > >> It's just that Nova has been a dumping ground over the past 7+ years for >> features that, looking back, should never have been added to Nova (or at >> least, never added to the Compute API) [4]. >> >> What we were discussing yesterday on IRC was this: >> >> "Which parts of the Compute API should have been implemented in other >> services?" 
>> >> What we are discussing here is this: >> >> "Which projects in the OpenStack community expanded the scope of the >> OpenStack mission beyond infrastructure-as-a-service?" >> >> and, following that: >> >> "What should we do about projects that expanded the scope of the >> OpenStack mission beyond infrastructure-as-a-service?" >> >> Note that, clearly, my opinion is that OpenStack's mission should be to >> provide infrastructure as a service projects (both plumbing and porcelain). >> >> This is MHO only. The actual OpenStack mission statement [5] is >> sufficiently vague as to provide no meaningful filtering value for >> determining new entrants to the project ecosystem. > I think this is inevitable, in that if you want to define cloud > computing in a single sentence it will necessarily be very vague. > > That's the reason for pursuing a technical vision statement > (brainstorming for which is how this discussion started), so we can > spell it out in a longer form. > > cheers, > Zane. > >> I *personally* believe that should change in order for the *OpenStack* >> community to have some meaningful definition and differentiation from >> the broader cloud computing, application development, and network >> orchestration ecosystems. >> >> All the best, >> -jay >> >> [4] ... or never brought into the Compute API to begin with. You know, >> vestigial tail and all that. >> >> [5] for reference: "The OpenStack Mission is to produce a ubiquitous >> Open Source Cloud Computing platform that is easy to use, simple to >> implement, interoperable between deployments, works well at all scales, >> and meets the needs of users and operators of both public and private >> clouds." >> >>> I guess from all the people who keep saying it ;) >>> >>> Apparently somebody was saying it a year ago too :D >>> https://twitter.com/zerobanana/status/883052105791156225 >>> >>> cheers, >>> Zane. 
>>> >>> [1] >>> http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-06-26.log.html#t2018-06-26T15:30:33 >>> >>> >>> __________________________________________________________________________ >>> >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From zbitter at redhat.com Mon Jul 2 19:31:13 2018 From: zbitter at redhat.com (Zane Bitter) Date: Mon, 2 Jul 2018 15:31:13 -0400 Subject: [openstack-dev] [tc] [all] TC Report 18-26 In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C141B52@EX10MBOX03.pnnl.gov> References: <70afeb87-37c9-1595-ffa4-aadbd1a90228@gmail.com> <1A3C52DFCD06494D8528644858247BF01C141B52@EX10MBOX03.pnnl.gov> Message-ID: <80d67ca3-81ef-875a-7ccb-2afad8913fb1@redhat.com> On 28/06/18 15:09, Fox, Kevin M wrote: > I'll weigh in a bit with my operator hat on as recent experience it pertains to the current conversation.... > > Kubernetes has largely succeeded in common distribution tools where OpenStack has not been able to. > kubeadm was created as a way to centralize deployment best practices, config, and upgrade stuff into a common code based that other deployment tools can build on. > > I think this has been successful for a few reasons: > * kubernetes followed a philosophy of using k8s to deploy/enhance k8s. (Eating its own dogfood) This is also TripleO's philosophy :) > * was willing to make their api robust enough to handle that self enhancement. (secrets are a thing, orchestration is not optional, etc) I don't even think that self-upgrading was the most important consequence of that. Fundamentally, they understood how applications would use it and made sure that the batteries were included. I think the fact that they conceived it explicitly as an application operation technology made this an obvious choice. I suspect that the reason we've lagged in standardising those things in OpenStack is that there's so many other ways to think of OpenStack before you get to that one. > * they decided to produce a reference product (very important to adoption IMO. You don't have to "build from source" to kick the tires.) > * made the barrier to testing/development as low as 'curl http://......minikube; minikube start' (this spurs adoption and contribution) That's not so different from devstack though. > * not having large silo's in deployment projects allowed better communication on common tooling. > * Operator focused architecture, not project based architecture. This simplifies the deployment situation greatly. 
> * try whenever possible to focus on just the commons and push vendor specific needs to plugins so vendors can deal with vendor issues directly and not corrupt the core. I agree with all of those, but to be fair to OpenStack, you're leaving out arguably the most important one: * Installation instructions start with "assume a working datacenter" They have that luxury; we do not. (To be clear, they are 100% right to take full advantage of that luxury. Although if there are still folks who go around saying that it's a trivial problem and OpenStackers must all be idiots for making it look so difficult, they should really stop embarrassing themselves.) > I've upgraded many OpenStacks since Essex and usually it is multiple weeks of prep, and a 1-2 day outage to perform the deed. about 50% of the upgrades, something breaks only on the production system and needs hot patching on the spot. About 10% of the time, I've had to write the patch personally. > > I had to upgrade a k8s cluster yesterday from 1.9.6 to 1.10.5. For comparison, what did I have to do? A couple hours of looking at release notes and trying to dig up examples of where things broke for others. Nothing popped up. Then: > > on the controller, I ran: > yum install -y kubeadm #get the newest kubeadm > kubeadm upgrade plan #check things out > > It told me I had 2 choices. I could: > * kubeadm upgrade v1.9.8 > * kubeadm upgrade v1.10.5 > > I ran: > kubeadm upgrade v1.10.5 > > The control plane was down for under 60 seconds and then the cluster was upgraded. The rest of the services did a rolling upgrade live and took a few more minutes. > > I can take my time to upgrade kubelets as mixed kubelet versions works well. > > Upgrading kubelet is about as easy. > > Done. > > There's a lot of things to learn from the governance / architecture of Kubernetes.. +1 > Fundamentally, there isn't huge differences in what Kubernetes and OpenStack tries to provide users. Scheduling a VM or a Container via an api with some kind of networking and storage is the same kind of thing in either case. Yes, from a user perspective that is (very) broadly accurate. But again, Kubernetes assumes that somebody else has provided the bottom few layers of implementation, while OpenStack *is* the somebody else. > The how to get the software (openstack or k8s) running is about as polar opposite you can get though. > > I think if OpenStack wants to gain back some of the steam it had before, it needs to adjust to the new world it is living in. This means: > * Consider abolishing the project walls. They are driving bad architecture (not intentionally but as a side affect of structure) In the spirit of cdent's blog post about random ideas: one idea I keep coming back to (and it's been around for a while, I don't remember who it first came from) is to start treating the compute node as a single project (I guess the k8s equivalent would be a kubelet). Have a single API - commands go in, events come out. Note that this would not include just the compute-node functionality of Nova, Neutron and Cinder, but ultimately also that of Ceilometer, Watcher, Freezer, Masakari (and possibly Congress and Vitrage?) as well. Some of those projects only exist at all because of boundaries between stuff on the compute node, while others are just unnecessarily complicated to add to a deployment because of those boundaries. 
(See https://julien.danjou.info/lessons-from-openstack-telemetry-incubation/ for some insightful observations on that topic - note that you don't have to agree with all of it to appreciate the point that the balkanisation of the compute node architecture leads to bad design decisions.) In theory doing that should make it easier to build e.g. a cut-down compute API of the kind that Jay was talking about upthread. I know that the short-term costs of making a change like this are going to be high - we aren't even yet at a point where making a stable API for compute drivers has been judged to meet a cost/benefit analysis. But maybe if we can do a comprehensive job of articulating the long-term benefits, we might find that it's still the right thing to do. > * focus on the commons first. > * simplify the architecture for ops: > * make as much as possible stateless and centralize remaining state. > * stop moving config options around with every release. Make it promote automatically and persist it somewhere. > * improve serial performance before sharding. k8s can do 5000 nodes on one control plane. No reason to do nova cells and make ops deal with it except for the most huge of clouds > * consider a reference product (think Linux vanilla kernel. distro's can provide their own variants. thats ok) > * come up with an architecture team for the whole, not the subsystem. The whole thing needs to work well. We probably actually need two groups: one to think about the architecture of the user experience of OpenStack, and one to think about the internal architecture as a whole. I'd be very enthusiastic about the TC chartering some group to work on this. It has worried me for a long time that there is nobody designing OpenStack as an whole; design is done at the level of individual projects, and OpenStack is an ad-hoc collection of what they produce. Unfortunately we did have an Architecture Working Group for a while (in the sense of the second definition above), and it fizzled out because there weren't enough people with enough time to work on it. Until we can identify at least a theoretical reason why a new effort would be more successful, I don't think there is going to be any appetite for trying again. cheers, Zane. > * encourage current OpenStack devs to test/deploy Kubernetes. It has some very good ideas that OpenStack could benefit from. If you don't know what they are, you can't adopt them. > > And I know its hard to talk about, but consider just adopting k8s as the commons and build on top of it. OpenStack's api's are good. The implementations right now are very very heavy for ops. You could tie in K8s's pod scheduler with vm stuff running in containers and get a vastly simpler architecture for operators to deal with. Yes, this would be a major disruptive change to OpenStack. But long term, I think it would make for a much healthier OpenStack. > > Thanks, > Kevin > ________________________________________ > From: Zane Bitter [zbitter at redhat.com] > Sent: Wednesday, June 27, 2018 4:23 PM > To: openstack-dev at lists.openstack.org > Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26 > > On 27/06/18 07:55, Jay Pipes wrote: >> WARNING: >> >> Danger, Will Robinson! Strong opinions ahead! > > I'd have been disappointed with anything less :) > >> On 06/26/2018 10:00 PM, Zane Bitter wrote: >>> On 26/06/18 09:12, Jay Pipes wrote: >>>> Is (one of) the problem(s) with our community that we have too small >>>> of a scope/footprint? No. Not in the slightest. 
>>> >>> Incidentally, this is an interesting/amusing example of what we talked >>> about this morning on IRC[1]: you say your concern is that the scope >>> of *Nova* is too big and that you'd be happy to have *more* services >>> in OpenStack if they took the orchestration load off Nova and left it >>> just to handle the 'plumbing' part (which I agree with, while noting >>> that nobody knows how to get there from here); but here you're >>> implying that Kata Containers (something that will clearly have no >>> effect either way on the simplicity or otherwise of Nova) shouldn't be >>> part of the Foundation because it will take focus away from >>> Nova/OpenStack. >> >> Above, I was saying that the scope of the *OpenStack* community is >> already too broad (IMHO). An example of projects that have made the >> *OpenStack* community too broad are purpose-built telco applications >> like Tacker [1] and Service Function Chaining. [2] >> >> I've also argued in the past that all distro- or vendor-specific >> deployment tools (Fuel, Triple-O, etc [3]) should live outside of >> OpenStack because these projects are more products and the relentless >> drive of vendor product management (rightfully) pushes the scope of >> these applications to gobble up more and more feature space that may or >> may not have anything to do with the core OpenStack mission (and have >> more to do with those companies' product roadmap). > > I'm still sad that we've never managed to come up with a single way to > install OpenStack. The amount of duplicated effort expended on that > problem is mind-boggling. At least we tried though. Excluding those > projects from the community would have just meant giving up from the > beginning. > > I think Thierry's new map, that collects installer services in a > separate bucket (that may eventually come with a separate git namespace) > is a helpful way of communicating to users what's happening without > forcing those projects outside of the community. > >> On the other hand, my statement that the OpenStack Foundation having 4 >> different focus areas leads to a lack of, well, focus, is a general >> statement on the OpenStack *Foundation* simultaneously expanding its >> sphere of influence while at the same time losing sight of OpenStack >> itself -- and thus the push to create an Open Infrastructure Foundation >> that would be able to compete with the larger mission of the Linux >> Foundation. >> >> [1] This is nothing against Tacker itself. I just don't believe that >> *applications* that are specially built for one particular industry >> belong in the OpenStack set of projects. I had repeatedly stated this on >> Tacker's application to become an OpenStack project, FWIW: >> >> https://review.openstack.org/#/c/276417/ >> >> [2] There is also nothing wrong with service function chains. I just >> don't believe they belong in *OpenStack*. They more appropriately belong >> in the (Open)NFV community because they just are not applicable outside >> of that community's scope and mission. >> >> [3] It's interesting to note that Airship was put into its own >> playground outside the bounds of the OpenStack community (but inside the >> bounds of the OpenStack Foundation). > > I wouldn't say it's inside the bounds of the Foundation, and in fact > confusion about that is a large part of why I wrote the blog post. It is > a 100% unofficial project that just happens to be hosted on our infra. > Saying it's inside the bounds of the Foundation is like saying > Kubernetes is inside the bounds of GitHub. 
> >> Airship is AT&T's specific >> deployment tooling for "the edge!". I actually think this was the >> correct move for this vendor-opinionated deployment tool. >> >>> So to answer your question: >>> >>> zaneb: yeah... nobody I know who argues for a small stable >>> core (in Nova) has ever said there should be fewer higher layer services. >>> zaneb: I'm not entirely sure where you got that idea from. >> >> Note the emphasis on *Nova* above? >> >> Also note that when I've said that *OpenStack* should have a smaller >> mission and scope, that doesn't mean that higher-level services aren't >> necessary or wanted. > > Thank you for saying this, and could I please ask you to repeat this > disclaimer whenever you talk about a smaller scope for OpenStack. > Because for those of us working on higher-level services it feels like > there has been a non-stop chorus (both inside and outside the project) > of people wanting to redefine OpenStack as something that doesn't > include us. > > The reason I haven't dropped this discussion is because I really want to > know if _all_ of those people were actually talking about something else > (e.g. a smaller scope for Nova), or if it's just you. Because you and I > are in complete agreement that Nova has grown a lot of obscure > capabilities that make it fiendishly difficult to maintain, and that in > many cases might never have been requested if we'd had higher-level > tools that could meet the same use cases by composing simpler operations. > > IMHO some of the contributing factors to that were: > > * The aforementioned hostility from some quarters to the existence of > higher-level projects in OpenStack. > * The ongoing hostility of operators to deploying any projects outside > of Keystone/Nova/Glance/Neutron/Cinder (*still* seen playing out in the > Barbican vs. Castellan debate, where we can't even correct one of > OpenStack's original sins and bake in a secret store - something k8s > managed from day one - because people don't want to install another ReST > API even over a backend that they'll already have to install anyway). > * The illegibility of public Nova interfaces to potential higher-level > tools. > >> It's just that Nova has been a dumping ground over the past 7+ years for >> features that, looking back, should never have been added to Nova (or at >> least, never added to the Compute API) [4]. >> >> What we were discussing yesterday on IRC was this: >> >> "Which parts of the Compute API should have been implemented in other >> services?" >> >> What we are discussing here is this: >> >> "Which projects in the OpenStack community expanded the scope of the >> OpenStack mission beyond infrastructure-as-a-service?" >> >> and, following that: >> >> "What should we do about projects that expanded the scope of the >> OpenStack mission beyond infrastructure-as-a-service?" >> >> Note that, clearly, my opinion is that OpenStack's mission should be to >> provide infrastructure as a service projects (both plumbing and porcelain). >> >> This is MHO only. The actual OpenStack mission statement [5] is >> sufficiently vague as to provide no meaningful filtering value for >> determining new entrants to the project ecosystem. > > I think this is inevitable, in that if you want to define cloud > computing in a single sentence it will necessarily be very vague. > > That's the reason for pursuing a technical vision statement > (brainstorming for which is how this discussion started), so we can > spell it out in a longer form. > > cheers, > Zane. 
> >> I *personally* believe that should change in order for the *OpenStack* >> community to have some meaningful definition and differentiation from >> the broader cloud computing, application development, and network >> orchestration ecosystems. >> >> All the best, >> -jay >> >> [4] ... or never brought into the Compute API to begin with. You know, >> vestigial tail and all that. >> >> [5] for reference: "The OpenStack Mission is to produce a ubiquitous >> Open Source Cloud Computing platform that is easy to use, simple to >> implement, interoperable between deployments, works well at all scales, >> and meets the needs of users and operators of both public and private >> clouds." >> >>> I guess from all the people who keep saying it ;) >>> >>> Apparently somebody was saying it a year ago too :D >>> https://twitter.com/zerobanana/status/883052105791156225 >>> >>> cheers, >>> Zane. >>> >>> [1] >>> http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-06-26.log.html#t2018-06-26T15:30:33 >>> >>> >>> __________________________________________________________________________ >>> >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From jaypipes at gmail.com Mon Jul 2 21:45:22 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 2 Jul 2018 17:45:22 -0400 Subject: [openstack-dev] [tc] [all] TC Report 18-26 In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C142AA9@EX10MBOX03.pnnl.gov> References: <70afeb87-37c9-1595-ffa4-aadbd1a90228@gmail.com> <1A3C52DFCD06494D8528644858247BF01C141B52@EX10MBOX03.pnnl.gov> <47f04147-d83d-4766-15e5-4e8e6e7c3a82@gmail.com> <1A3C52DFCD06494D8528644858247BF01C142AA9@EX10MBOX03.pnnl.gov> Message-ID: On 07/02/2018 03:12 PM, Fox, Kevin M wrote: > I think a lot of the pushback around not adding more common/required services is the extra load it puts on ops though. hence these: >> * Consider abolishing the project walls. >> * simplify the architecture for ops > > IMO, those need to change to break free from the pushback and make progress on the commons again. What *specifically* would you do, Kevin? 
-jay From jaypipes at gmail.com Mon Jul 2 23:13:36 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 2 Jul 2018 19:13:36 -0400 Subject: [openstack-dev] [tc] [all] TC Report 18-26 In-Reply-To: References: <70afeb87-37c9-1595-ffa4-aadbd1a90228@gmail.com> Message-ID: <542aba82-5c0c-3549-e587-2deded610fe9@gmail.com> On 06/27/2018 07:23 PM, Zane Bitter wrote: > On 27/06/18 07:55, Jay Pipes wrote: >> Above, I was saying that the scope of the *OpenStack* community is >> already too broad (IMHO). An example of projects that have made the >> *OpenStack* community too broad are purpose-built telco applications >> like Tacker [1] and Service Function Chaining. [2] >> >> I've also argued in the past that all distro- or vendor-specific >> deployment tools (Fuel, Triple-O, etc [3]) should live outside of >> OpenStack because these projects are more products and the relentless >> drive of vendor product management (rightfully) pushes the scope of >> these applications to gobble up more and more feature space that may >> or may not have anything to do with the core OpenStack mission (and >> have more to do with those companies' product roadmap). > > I'm still sad that we've never managed to come up with a single way to > install OpenStack. The amount of duplicated effort expended on that > problem is mind-boggling. At least we tried though. Excluding those > projects from the community would have just meant giving up from the > beginning. You have to have motivation from vendors in order to achieve said single way of installing OpenStack. I gave up a long time ago on distros and vendors to get behind such an effort. Where vendors see $$$, they will attempt to carve out value differentiation. And value differentiation leads to, well, differences, naturally. And, despite what some might misguidedly think, Kubernetes has no single installation method. Their *official* setup/install page is here: https://kubernetes.io/docs/setup/pick-right-solution/ It lists no fewer than *37* (!) different ways of installing Kubernetes, and I'm not even including anything listed in the "Custom Solutions" section. > I think Thierry's new map, that collects installer services in a > separate bucket (that may eventually come with a separate git namespace) > is a helpful way of communicating to users what's happening without > forcing those projects outside of the community. Sure, I agree the separate bucket is useful, particularly when paired with information that allows operators to know how stable and/or bleeding edge the code is expected to be -- you know, those "tags" that the TC spent time curating. >>> So to answer your question: >>> >>> zaneb: yeah... nobody I know who argues for a small stable >>> core (in Nova) has ever said there should be fewer higher layer >>> services. >>> zaneb: I'm not entirely sure where you got that idea from. >> >> Note the emphasis on *Nova* above? >> >> Also note that when I've said that *OpenStack* should have a smaller >> mission and scope, that doesn't mean that higher-level services aren't >> necessary or wanted. > > Thank you for saying this, and could I please ask you to repeat this > disclaimer whenever you talk about a smaller scope for OpenStack. Yes. I shall shout it from the highest mountains. [1] > Because for those of us working on higher-level services it feels like > there has been a non-stop chorus (both inside and outside the project) > of people wanting to redefine OpenStack as something that doesn't > include us. 
I've said in the past (on Twitter, can't find the link right now, but it's out there somewhere) something to the effect of "at some point, someone just needs to come out and say that OpenStack is, at its core, Nova, Neutron, Keystone, Glance and Cinder". Perhaps this is what you were recollecting. I would use a different phrase nowadays to describe what I was thinking with the above. I would say instead "Nova, Neutron, Cinder, Keystone and Glance [2] are a definitive lower level of an OpenStack deployment. They represent a set of required integrated services that supply the most basic infrastructure for datacenter resource management when deploying OpenStack." Note the difference in wording. Instead of saying "OpenStack is X", I'm saying "These particular services represent a specific layer of an OpenStack deployment". Nowadays, I would further add something to the effect of "Depending on the particular use cases and workloads the OpenStack deployer wishes to promote, an additional layer of services provides workload orchestration and workflow management capabilities. This layer of services include Heat, Mistral, Tacker, Service Function Chaining, Murano, etc". Does that provide you with some closure on this feeling of "non-stop chorus" of exclusion that you mentioned above? > The reason I haven't dropped this discussion is because I really want to > know if _all_ of those people were actually talking about something else > (e.g. a smaller scope for Nova), or if it's just you. Because you and I > are in complete agreement that Nova has grown a lot of obscure > capabilities that make it fiendishly difficult to maintain, and that in > many cases might never have been requested if we'd had higher-level > tools that could meet the same use cases by composing simpler operations. > > IMHO some of the contributing factors to that were: > > * The aforementioned hostility from some quarters to the existence of > higher-level projects in OpenStack. > * The ongoing hostility of operators to deploying any projects outside > of Keystone/Nova/Glance/Neutron/Cinder (*still* seen playing out in the > Barbican vs. Castellan debate, where we can't even correct one of > OpenStack's original sins and bake in a secret store - something k8s > managed from day one - because people don't want to install another ReST > API even over a backend that they'll already have to install anyway). > * The illegibility of public Nova interfaces to potential higher-level > tools. I would like to point something else out here. Something that may not be pleasant to confront. Heat's competition (for resources and mindshare) is Kubernetes, plain and simple. Heat's competition is not other OpenStack projects. Nova's competition is not Kubernetes (despite various people continuing to say that it is). Nova is not an orchestration system. Never was and (as long as I'm kicking and screaming) never will be. Nova's primary competition is: * Stand-alone Ironic * oVirt and stand-alone virsh callers * Parts of VMWare vCenter [3] * MaaS in some respects * The *compute provisioning* parts of EC2, Azure, and GCP This is why there is a Kubernetes OpenStack cloud provider plugin [4]. This plugin uses Nova [5] (which can potentially use Ironic), Cinder, Keystone and Neutron to deploy kubelets to act as nodes in a Kubernetes cluster and load balancer objects to act as the proxies that k8s itself uses when deploying Pods and Services. 
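To make the "plumbing" relationship concrete: the plugin itself is Go code built on gophercloud, and the operations it performs are ordinary Nova/Neutron/Octavia calls that any external orchestrator could make. A rough openstacksdk sketch of the same calls (the cloud, image, flavor and network names below are placeholders, not anything the plugin hard-codes):

import openstack

# Placeholder cloud/image/flavor/network names -- purely illustrative.
conn = openstack.connect(cloud='mycloud')

image = conn.compute.find_image('ubuntu-18.04')
flavor = conn.compute.find_flavor('m1.medium')
network = conn.network.find_network('k8s-nodes')

# "Deploy a kubelet node" boils down to this at the IaaS layer:
server = conn.compute.create_server(
    name='k8s-node-0', image_id=image.id, flavor_id=flavor.id,
    networks=[{'uuid': network.id}])
server = conn.compute.wait_for_server(server)

# A Kubernetes Service of type LoadBalancer boils down to this
# (assumes the network has at least one subnet):
lb = conn.load_balancer.create_load_balancer(
    name='k8s-svc-lb', vip_subnet_id=network.subnet_ids[0])

Nothing in that flow asks OpenStack to orchestrate anything; Kubernetes keeps the desired-state reconciliation loop on its side and consumes OpenStack purely as plumbing.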
Heat's architecture, template language and object constructs are in direct competition with Kubernetes' API and architecture, with the primary difference being a VM-centric [6] vs. a container-centric object model. Heat's template language is similar to Helm's chart template YAML structure [7], and with Heat's evolution to the "convergence model", Heat's architecture actually got closer to Kubernetes' architecture: that of continually attempting to converge an observed state with a desired state. So, what is Heat to do? The hype and marketing machine is never-ending, I'm afraid. [8] I'm not sure there's actually anything that can be done about this. Perhaps it is a fait accomplis that Kubernetes/Helm will/has become synonymous with "orchestration of things". Perhaps not. I'm not an oracle, unfortunately. Maybe the only thing that Heat can do to fend off the coming doom is to make a case that Heat's performance, reliability, feature set or integration with OpenStack's other services make it a better candidate for orchestrating virtual machine or baremetal workloads on an OpenStack deployment than Kubernetes is. Sorry to be the bearer of bad news, -jay [1] I live in Florida, though, which has no mountains. But, when I visit, say, North Carolina, I shall certainly shout it from their mountains. [2] some would also say Castellan, Ironic and Designate belong here. [3] Though VMWare is still trying to be everything that certain IT administrators ever needed, including orchestration, backup services, block storage pooling, high availability, quota management, etc etc [4] https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/#openstack [5] https://github.com/kubernetes/kubernetes/blob/92b81114f43f3ca74988194406957a5d1ffd1c5d/pkg/cloudprovider/providers/openstack/openstack.go#L377 [6] The fact that Heat started as a CloudFormation API clone gave it its VM-centricity. [7] https://github.com/kubernetes/helm/blob/master/docs/chart_template_guide/index.md [8] The Kubernetes' machine has essentially decimated all the other "orchestration of things" projects' resources and mindshare, including a number of them that were very well architected, well coded, and well documented: * Mesos with Marathon/Aurora * Rancher * OpenShift (you know, the original, original one...) * Nomad * Docker Swarm/Compose From zhengzhenyulixi at gmail.com Tue Jul 3 01:23:28 2018 From: zhengzhenyulixi at gmail.com (Zhenyu Zheng) Date: Tue, 3 Jul 2018 09:23:28 +0800 Subject: [openstack-dev] [nova] Continuously growing request_specs table In-Reply-To: <986ac982-0638-721f-922c-d0843ba78ce1@gmail.com> References: <986ac982-0638-721f-922c-d0843ba78ce1@gmail.com> Message-ID: Thanks, I may have missed that one. On Mon, Jul 2, 2018 at 10:29 PM Matt Riedemann wrote: > On 7/2/2018 2:47 AM, Zhenyu Zheng wrote: > > It seems that the current request_specs record did not got removed even > > when the related instance is gone, which lead to a continuously growing > > request_specs table. How is that so? > > > > Is it because the delete process could be error and we have to recover > > the request_spec if we deleted it? > > > > How about adding a nova-manage CLI command for operators to clean up > > out-dated request specs records from the table by comparing the request > > specs and existence of related instance? 
> > Already fixed in Rocky: > > https://review.openstack.org/#/c/515034/ > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yamamoto at midokura.com Tue Jul 3 04:12:53 2018 From: yamamoto at midokura.com (Takashi Yamamoto) Date: Tue, 3 Jul 2018 13:12:53 +0900 Subject: [openstack-dev] [neutron] autodoc and tox-siblings Message-ID: hi, - networking-midonet uses autodoc in their doc. build-openstack-sphinx-docs runs it. - build-openstack-sphinx-docs doesn't use tox-siblings. thus the job uses released versions of dependencies. eg. neutron, neutron-XXXaas, os-vif, etc - released versions of dependencies and networking-midonet master are not necessarily compatible - a consequence: https://bugs.launchpad.net/networking-midonet/+bug/1779801 (in this case, neutron-lib and neutron are not compatible) possible solutions i can think of: - stop using autodoc (i suspect i have to do this for now) - make intermediate releases of neutron and friends - finish neutron-lib work and stop importing neutron etc (ideal but we have not reached this stage yet) - make doc job use tox-siblings (as it used to do in tox_install era) any suggestions? From rico.lin.guanyu at gmail.com Tue Jul 3 06:14:13 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Tue, 3 Jul 2018 14:14:13 +0800 Subject: [openstack-dev] [Openstack-sigs][self-healing][heat][vitrage][mistral] Self-Healing with Vitrage, Heat, and Mistral Message-ID: Dear all Back to Vancouver Summit, Ifat brings out the idea of integrating Heat, Vitrage, and Mistral to bring better self-healing scenario. For previous works, There already works cross Heat, Mistral, and Zaqar for self-healing [1]. And there is works cross Vitrage, and Mistral [2]. Now we plan to start working on integrating two works (as much as it can/should be) and to make sure the scenario works and keep it working. The integrated scenario flow will look something like this: An existing monitor detect host/network failure and send an alarm to Vitrage -> Vitrage deduces that the instance is down (based on the topology and based on Vitrage templates [2]) -> Vitrage triggers Mistral to fix the instance -> application is recovered We created an Etherpad [3] to document all discussion/feedbacks/plans (and will add more detail through time) Also, create a story in self-healing SIG to track all task. The current plans are: - A spec for Vitrage resources in Heat [5] - Create Vitrage resources in Heat - Write Heat Template and Vitrage Template for this scenario - A tempest task for above scenario - Add periodic job for this scenario (with above task). The best place to host this job (IMO) is under self-healing SIG To create a periodic job for self-healing sig means we might also need a place to manage those self-healing tempest test. For this scenario, I think it will make sense if we use heat-tempest-plugin to store that scenario test (since it will wrap as a Heat template) or use vitrage-tempest-plugin (since most of the test scenario are actually already there). Not sure what will happen if we create a new tempest plugin for self-healing and no manager for it. 
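Whichever plugin ends up hosting the test, the scenario itself can be exercised by hand while the pieces land. A rough python-heatclient sketch (the auth values and template filename are placeholders, and a real tempest test would normally go through tempest's own service clients rather than python-heatclient):

from keystoneauth1 import session
from keystoneauth1.identity import v3
from heatclient import client as heat_client

# Placeholder credentials -- replace with values for your cloud.
auth = v3.Password(auth_url='http://controller:5000/v3',
                   username='admin', password='secret',
                   project_name='admin',
                   user_domain_id='default',
                   project_domain_id='default')
heat = heat_client.Client('1', session=session.Session(auth=auth))

# self_healing.yaml is assumed to be the Heat template wrapping the
# Vitrage alarm/template and the Mistral recovery workflow above.
with open('self_healing.yaml') as f:
    template = f.read()

heat.stacks.create(stack_name='self-healing-demo', template=template)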
We still got some uncertainty to clear during working on it, but the big picture looks like all will works(if we doing all well on above tasks). Please provide your feedback or question if you have any. We do needs feedbacks and reviews on patches or any works. If you're interested in this, please join us (we need users/ops/devs!). [1] https://github.com/openstack/heat-templates/tree/master/hot/autohealing [2] https://github.com/openstack/self-healing-sig/blob/master/specs/vitrage-mistral-integration.rst [3] https://etherpad.openstack.org/p/self-healing-with-vitrage-mistral-heat [4] https://storyboard.openstack.org/#!/story/2002684 [5] https://review.openstack.org/#/c/578786 -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From aj at suse.com Tue Jul 3 06:27:35 2018 From: aj at suse.com (Andreas Jaeger) Date: Tue, 3 Jul 2018 08:27:35 +0200 Subject: [openstack-dev] [neutron] autodoc and tox-siblings In-Reply-To: References: Message-ID: <4383f1f9-44be-fce2-7215-0be76710e67f@suse.com> On 2018-07-03 06:12, Takashi Yamamoto wrote: > hi, > > - networking-midonet uses autodoc in their doc. > build-openstack-sphinx-docs runs it. > - build-openstack-sphinx-docs doesn't use tox-siblings. thus the job > uses released versions of dependencies. eg. neutron, neutron-XXXaas, > os-vif, etc > - released versions of dependencies and networking-midonet master are > not necessarily compatible > - a consequence: https://bugs.launchpad.net/networking-midonet/+bug/1779801 > (in this case, neutron-lib and neutron are not compatible) > > possible solutions i can think of: > - stop using autodoc (i suspect i have to do this for now) > - make intermediate releases of neutron and friends > - finish neutron-lib work and stop importing neutron etc (ideal but we > have not reached this stage yet) > - make doc job use tox-siblings (as it used to do in tox_install era) > > any suggestions? Did you see http://lists.openstack.org/pipermail/openstack-dev/2018-April/128986.html - best discuss with Stephen whether that's a better solution, Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From tdecacqu at redhat.com Tue Jul 3 07:39:58 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Tue, 03 Jul 2018 07:39:58 +0000 Subject: [openstack-dev] log-classify project update (anomaly detection in CI/CD logs) Message-ID: <1530601298.luby16yqut.tristanC@fedora> Hello, This is a follow-up to the initial project creation thread[0]. At the Vancouver Summit, we met to discuss ML for CI[1] and I lead a workshop on logreduce[2]. The log-classify project bootstrap is still waiting for review[3] and I am still looking forward to pushing logreduce[4] source code in openstack-infra/log-classify. The current implementation is working fine and I am going to enable it for every job running on Software Factory. However the core process HashingNeighbors[5] is rather slow (0.3MB per second) and I would like to improve it and/or implement other algorithms. To do that effectively, we need to gather more datasets[6]. I would like to propose some enhancements to the os-loganalyze[7] middleware to enable users to annotate and report anomalies they find in log files. 
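As an aside, for anyone unfamiliar with the model: HashingNeighbors boils down to hashing every baseline log line into a sparse feature vector, indexing those vectors, and flagging target lines whose nearest baseline neighbour is too far away. A simplified scikit-learn sketch of the idea (not the actual logreduce code, and the threshold value is arbitrary):

from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.neighbors import NearestNeighbors

def find_anomalies(baseline_lines, target_lines, threshold=0.2):
    # Hash each log line into a sparse fixed-width feature vector.
    vectorizer = HashingVectorizer(analyzer='word', n_features=2 ** 15)
    baseline = vectorizer.transform(baseline_lines)

    # Index the baseline; a target line that is far from every
    # baseline line is reported as an anomaly.
    nn = NearestNeighbors(n_neighbors=1).fit(baseline)
    distances, _ = nn.kneighbors(vectorizer.transform(target_lines))
    return [line for line, dist in zip(target_lines, distances[:, 0])
            if dist > threshold]

Better datasets[6] are what make it possible to compare this naive approach against faster or smarter alternatives on realistic anomalies.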
To store the anoamlies reference, an additional webservice, or perhaps direct access to an elasticsearch cluster would be required. In parallel, we need to collect the users' feedback and create datasets daily using the baseline available at the time each anomaly was discovered. Ideally, we would create an ipfs (or any other network filesystem) that could then be used by anyone willing to work on $subject. There is a lot to do and it will be challening. To that effect, I would like to propose an initial meeting with all interested parties. Please register your irc name and timezone in this etherpad: https://etherpad.openstack.org/p/log-classify Due to OpenStack's exceptional infrastructure and recent Zuul v3 release, I think we are in a strong position to tackle this challenge. Others suggestions to bootstrap this effort within our community are welcome. Best regards, -Tristan [0] http://lists.openstack.org/pipermail/openstack-infra/2017-November/005676.html [1] https://etherpad.openstack.org/p/YVR-ml-ci-results [2] https://github.com/TristanCacqueray/anomaly-detection-workshop-opendev [3] https://review.openstack.org/#/q/topic:crm-import [4] git clone https://softwarefactory-project.io/r/logreduce [5] https://softwarefactory-project.io/cgit/logreduce/tree/logreduce/models.py [6] https://softwarefactory-project.io/cgit/logreduce-tests/tree/tests [7] https://review.openstack.org/#/q/topic:loganalyze-user-feedback -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From thierry at openstack.org Tue Jul 3 08:34:54 2018 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 3 Jul 2018 10:34:54 +0200 Subject: [openstack-dev] [tc] [all] TC Report 18-26 In-Reply-To: <80d67ca3-81ef-875a-7ccb-2afad8913fb1@redhat.com> References: <70afeb87-37c9-1595-ffa4-aadbd1a90228@gmail.com> <1A3C52DFCD06494D8528644858247BF01C141B52@EX10MBOX03.pnnl.gov> <80d67ca3-81ef-875a-7ccb-2afad8913fb1@redhat.com> Message-ID: <71ed149f-f244-363b-f121-cff541fb3abc@openstack.org> Zane Bitter wrote: >> [...] >> I think if OpenStack wants to gain back some of the steam it had >> before, it needs to adjust to the new world it is living in. This means: >>   * Consider abolishing the project walls. They are driving bad >> architecture (not intentionally but as a side affect of structure) > > In the spirit of cdent's blog post about random ideas: one idea I keep > coming back to (and it's been around for a while, I don't remember who > it first came from) is to start treating the compute node as a single > project (I guess the k8s equivalent would be a kubelet). Have a single > API - commands go in, events come out. Right, that's what SIG Node in Kubernetes is focused on: optimize what ends up running on the Kubernetes node. That's where their goal-oriented team structure shines, and why I'd like us to start organizing work along those lines as well (rather than along code repository ownership lines). > [...] > We probably actually need two groups: one to think about the > architecture of the user experience of OpenStack, and one to think about > the internal architecture as a whole. > > I'd be very enthusiastic about the TC chartering some group to work on > this. It has worried me for a long time that there is nobody designing > OpenStack as an whole; design is done at the level of individual > projects, and OpenStack is an ad-hoc collection of what they produce. 
> Unfortunately we did have an Architecture Working Group for a while (in > the sense of the second definition above), and it fizzled out because > there weren't enough people with enough time to work on it. Until we can > identify at least a theoretical reason why a new effort would be more > successful, I don't think there is going to be any appetite for trying > again. I agree. As one of the very few people that showed up to try to drive this working group, I could see that the people calling for more architectural up-front design are generally not the people showing up to help drive it. Because the reality of that work is not about having good ideas -- "put me in charge and I'll fix everything". It's about taking the time to document it, advocate for it, and yes, drive it and implement it across project team boundaries. It's a lot more work than posting a good idea on an email thread wondering why nobody else is doing it. Another thing we need to keep in mind is that OpenStack has a lot of successful users, and IMHO we can't afford to break them. Proposing incremental, backward-compatible change is therefore more productive than talking about how you would design OpenStack if you started today. -- Thierry Carrez (ttx) From renat.akhmerov at gmail.com Tue Jul 3 11:33:44 2018 From: renat.akhmerov at gmail.com (Renat Akhmerov) Date: Tue, 3 Jul 2018 18:33:44 +0700 Subject: [openstack-dev] [sqlalchemy][db][oslo.db][mistral] Is there a recommended MySQL driver for OpenStack projects? Message-ID: <91cbd291-628b-4dfb-8a96-080af2ef6391@Spark> Hi, We’ve tried to address the bug [1] which is essentially caused by the fact that we saw that MySQLDb driver wasn’t compatible with eventlet’s green threads. In a nutshell, when we used the “eventlet” RPC executor (see [2]), the system would get stuck once in a while when dispatching green between green threads when it tried to hit Mysql, but since the driver wasn’t eventlet friendly it didn’t work. For that reason we had to use the “blocking” RPC executor so far for Mistral Engine that deals with DB transactions. Now, I am back to experiment with all this and see if we can actually switch to “eventlet” like most other project do. So far, it seems like the problem is gone in case if I’m using Pymysql driver (didn’t yet try other drivers like mysqlclient and the official mysql connector from Oracle). Previously, at least in production we always used MySQLDb on Python 2.7. So, I’m trying to understand if we have a “community recommended” (or may be even mandatory) Mysql driver to use and what consequences of the driver choice are. I’d appreciate any help with clarifying this (may be links to some previous discussions etc.) Thanks [1] https://bugs.launchpad.net/mistral/+bug/1696469 [2] https://docs.openstack.org/oslo.messaging/ocata/executors.html#eventlet Renat Akhmerov @Nokia -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Greg.Waines at windriver.com Tue Jul 3 11:38:47 2018 From: Greg.Waines at windriver.com (Waines, Greg) Date: Tue, 3 Jul 2018 11:38:47 +0000 Subject: [openstack-dev] [Edge-computing] [edge][glance][mixmatch]: Wiki of the possible architectures for image synchronisation In-Reply-To: References: <54898258-0FC0-46F3-9C64-FE4CEEA2B78C@windriver.com> <0B139046-4F69-452E-B390-C756543EA270@windriver.com> Message-ID: In-lined below, Greg From: "Csatari, Gergely (Nokia - HU/Budapest)" Date: Monday, July 2, 2018 at 7:15 AM To: Greg Waines , "openstack-dev at lists.openstack.org" , "edge-computing at lists.openstack.org" Subject: RE: [Edge-computing] [edge][glance][mixmatch]: Wiki of the possible architectures for image synchronisation Hi, Going inline. From: Waines, Greg [mailto:Greg.Waines at windriver.com] Sent: Friday, June 29, 2018 4:25 AM In-lined comments / questions below, Greg. From: "Csatari, Gergely (Nokia - HU/Budapest)" > Date: Thursday, June 28, 2018 at 3:35 AM Hi, I’ve added the following pros and cons to the different options: * One Glance with multiple backends [1] [Greg] I’m not sure I understand this option. Is each Glance Backend completely independent ? e.g. when I do a “glance image-create ...” am I specifying a backend and that’s where the image is to be stored ? This is what I was originally thinking. So I was thinking that synchronization of images to Edge Clouds is simply done by doing “glance image-create ...” to the appropriate backends. But then you say “The syncronisation of the image data is the responsibility of the backend (eg.: CEPH).” ... which makes it sound like my thinking above is wrong and the Backends are NOT completely independent, but instead in some sort of replication configuration ... is this leveraging ceph replication factor or something (for example) ? [G0]: According to my understanding the backends are in a replication configuration in this case. Jokke, am I right? * Pros: * Relatively easy to implement based on the current Glance architecture * Cons: * Requires the same Glance backend in every edge cloud instance * Requires the same OpenStack version in every edge cloud instance (apart from during upgrade) * Sensitivity for network connection loss is not clear [Greg] I could be wrong, but even though the OpenStack services in the edge clouds are using the images in their glance backend with a direct URL, I think the OpenStack services (e.g. nova) still need to get the direct URL via the Glance API which is ONLY available at the central site. So don’t think this option supports autonomy of edge Subcloud when connectivity is lost to central site. [G0]: Can’t the url point to the local Glance backend somehow? [Greg] Need some input from Nova or Glance guy, but believe that Nova must still use Glance API to get access to the image, however if the storage is remote, there can be some backend optimization between glance and nova, e.g. a direct URL to the image. * Several Glances with an independent syncronisation service, sych via Glance API [2] * Pros: * Every edge cloud instance can have a different Glance backend * Can support multiple OpenStack versions in the different edge cloud instances * Can be extended to support multiple VIM types * Cons: * Needs a new synchronisation service [Greg] Don’t believe this is a big con ... suspect we are going to need this new synchronization service for synchronizing resources of a number of other openstack services ... not just glance. 
[G0]: I agree, it is not a big con, but it is a con 😊 Should I add some note saying, that a synch service is most probably needed anyway? * Several Glances with an independent syncronisation service, synch using the backend [3] [Greg] This option seems a little odd to me. We are synching the GLANCE DB via some new synchronization service, but synching the Images themselves via the backend ... I think that would be tricky to ensure consistency. [G0]: Yes, there is a place for errors here. * Pros: * I could not find any * Cons: * Needs a new synchronisation service * One Glance and multiple Glance API servers [4] * Pros: * Implicitly location aware * Cons: * First usage of an image always takes a long time * In case of network connection error to the central Galnce Nova will have access to the images, but will not be able to figure out if the user have rights to use the image and will not have path to the images data [Greg] Yeah we tripped over the issue that although the Glance API can cache the image itself, it does NOT cache the image meta data (which I am guessing has info like “user access” etc.) ... so this option improves latency of access to image itself but does NOT provide autonomy. We plan on looking at options to resolve this, as we like the “implicit location awareness” of this option ... and believe it is an option that some customers will like. If anyone has any ideas ? Are these correct? Do I miss anything? Thanks, Gerg0 [1]: https://wiki.openstack.org/wiki/Image_handling_in_edge_environment#One_Glance_with_multiple_backends [2]: https://wiki.openstack.org/wiki/Image_handling_in_edge_environment#Several_Glances_with_an_independent_syncronisation_service.2C_sych_via_Glance_API [3]: https://wiki.openstack.org/wiki/Image_handling_in_edge_environment#Several_Glances_with_an_independent_syncronisation_service.2C_synch_using_the_backend [4]: https://wiki.openstack.org/wiki/Image_handling_in_edge_environment#One_Glance_and_multiple_Glance_API_servers From: Csatari, Gergely (Nokia - HU/Budapest) Sent: Monday, June 11, 2018 4:29 PM To: Waines, Greg >; OpenStack Development Mailing List (not for usage questions) >; edge-computing at lists.openstack.org Subject: RE: [Edge-computing] [edge][glance][mixmatch]: Wiki of the possible architectures for image synchronisation Hi, Thanks for the comments. I’ve updated the wiki: https://wiki.openstack.org/wiki/Image_handling_in_edge_environment#Several_Glances_with_an_independent_syncronisation_service.2C_synch_using_the_backend Br, Gerg0 From: Waines, Greg [mailto:Greg.Waines at windriver.com] Sent: Friday, June 8, 2018 1:46 PM To: Csatari, Gergely (Nokia - HU/Budapest) >; OpenStack Development Mailing List (not for usage questions) >; edge-computing at lists.openstack.org Subject: Re: [Edge-computing] [edge][glance][mixmatch]: Wiki of the possible architectures for image synchronisation Responses in-lined below, Greg. From: "Csatari, Gergely (Nokia - HU/Budapest)" > Date: Friday, June 8, 2018 at 3:39 AM To: Greg Waines >, "openstack-dev at lists.openstack.org" >, "edge-computing at lists.openstack.org" > Subject: RE: [Edge-computing] [edge][glance][mixmatch]: Wiki of the possible architectures for image synchronisation Hi, Going inline. 
From: Waines, Greg [mailto:Greg.Waines at windriver.com] Sent: Thursday, June 7, 2018 2:24 PM I had some additional questions/comments on the Image Synchronization Options ( https://wiki.openstack.org/wiki/Image_handling_in_edge_environment ): One Glance with multiple backends * In this scenario, are all Edge Clouds simply configured with the one central glance for its GLANCE ENDPOINT ? * i.e. GLANCE is a typical shared service in a multi-region environment ? [G0]: In my understanding yes. * If so, how does this OPTION support the requirement for Edge Cloud Operation when disconnected from Central Location ? [G0]: This is an open question for me also. Several Glances with an independent synchronization service (PUSH) * I refer to this as the PUSH model * I don’t believe you have to ( or necessarily should) rely on the backend to do the synchronization of the images * i.e. the ‘Synch Service’ could do this strictly through Glance REST APIs (making it independent of the particular Glance backend ... and allowing the Glance Backends at Central and Edge sites to actually be different) [G0]: Okay, I can update the wiki to reflect this. Should we keep the “synchronization by the backend” option as an other alternative? [Greg] Yeah we should keep it as an alternative. * I think the ‘Synch Service’ MUST be able to support ‘selective/multicast’ distribution of Images from Central to Edge for Image Synchronization * i.e. you don’t want Central Site pushing ALL images to ALL Edge Sites ... especially for the small Edge Sites [G0]: Yes, the question is how to define these synchronization policies. [Greg] Agreed ... we’ve had some very high-level discussions with end users, but haven’t put together a proposal yet. * Not sure ... but I didn’t think this was the model being used in mixmatch ... thought mixmatch was more the PULL model (below) [G0]: Yes, this is more or less my understanding. I remove the mixmatch reference from this chapter. One Glance and multiple Glance API Servers (PULL) * I refer to this as the PULL model * This is the current model supported in StarlingX’s Distributed Cloud sub-project * We run glance-api on all Edge Clouds ... that talk to glance-registry on the Central Cloud, and * We have glance-api setup for caching such that only the first access to an particular image incurs the latency of the image transfer from Central to Edge [G0]: Do you do image caching in Glance API or do you rely in the image cache in Nova? In the Forum session there were some discussions about this and I think the conclusion was that using the image cache of Nova is enough. [Greg] We enabled image caching in the Glance API. I believe that Nova Image Caching caches at the compute node ... this would work ok for all-in-one edge clouds or small edge clouds. But glance-api caching caches at the edge cloud level, so works better for large edge clouds with lots of compute nodes. * * this PULL model affectively implements the location aware synchronization you talk about below, (i.e. synchronise images only to those cloud instances where they are needed)? In StarlingX Distributed Cloud, We plan on supporting both the PUSH and PULL model ... suspect there are use cases for both. [G0]: This means that you need an architecture supporting both. Just for my curiosity what is the use case for the pull model once you have the push model in place? [Greg] The PULL model certainly results in the most efficient distribution of images ... basically images are distributed ONLY to edge clouds that explicitly use the image. 
Also if the use case is NOT concerned about incurring the latency of the image transfer from Central to Edge on the FIRST use of image then the PULL model could be preferred ... TBD. Here is the updated wiki: https://wiki.openstack.org/wiki/Image_handling_in_edge_environment [Greg] Looks good. Greg. Thanks, Gerg0 From: "Csatari, Gergely (Nokia - HU/Budapest)" > Date: Thursday, June 7, 2018 at 6:49 AM To: "openstack-dev at lists.openstack.org" >, "edge-computing at lists.openstack.org" > Subject: Re: [Edge-computing] [edge][glance][mixmatch]: Wiki of the possible architectures for image synchronisation Hi, I did some work ont he figures and realised, that I have some questions related to the alternative options: Multiple backends option: - What is the API between Glance and the Glance backends? - How is it possible to implement location aware synchronisation (synchronise images only to those cloud instances where they are needed)? - Is it possible to have different OpenStack versions in the different cloud instances? - Can a cloud instance use the locally synchronised images in case of a network connection break? - Is it possible to implement this without storing database credentials ont he edge cloud instances? Independent synchronisation service: - If I understood [1] correctly mixmatch can help Nova to attach a remote volume, but it will not help in synchronizing the images. is this true? As I promised in the Edge Compute Group call I plan to organize an IRC review meeting to check the wiki. Please indicate your availability in [2]. [1]: https://mixmatch.readthedocs.io/en/latest/ [2]: https://doodle.com/poll/bddg65vyh4qwxpk5 Br, Gerg0 From: Csatari, Gergely (Nokia - HU/Budapest) Sent: Wednesday, May 23, 2018 8:59 PM To: OpenStack Development Mailing List (not for usage questions) >; edge-computing at lists.openstack.org Subject: [edge][glance]: Wiki of the possible architectures for image synchronisation Hi, Here I send the wiki page [1] where I summarize what I understood from the Forum session about image synchronisation in edge environment [2], [3]. Please check and correct/comment. Thanks, Gerg0 [1]: https://wiki.openstack.org/wiki/Image_handling_in_edge_environment [2]: https://etherpad.openstack.org/p/yvr-edge-cloud-images [3]: https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21768/image-handling-in-an-edge-cloud-infrastructure -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Tue Jul 3 12:04:57 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Tue, 3 Jul 2018 14:04:57 +0200 Subject: [openstack-dev] [Puppet] Requirements for running puppet unit tests? In-Reply-To: References: <20180629000402.cuf2tpdc4fsagnkk@redhat.com> Message-ID: <954ef03b-3603-90e3-525d-03d9e60dbf1f@redhat.com> On 07/02/2018 08:57 PM, Lars Kellogg-Stedman wrote: > On Thu, Jun 28, 2018 at 8:04 PM, Lars Kellogg-Stedman > wrote: > > What is required to successfully run the rspec tests? > > > On the odd chance that it might be useful to someone else, here's the Docker > image I'm using to successfully run the rspec tests for puppet-keystone: > > https://github.com/larsks/docker-image-rspec > > Available on docker hub  as larsks/rspec. Nice, thanks! Last time I tried the tests my ruby was too new, so I'll give this a try. 
> > Cheers, > > -- > Lars Kellogg-Stedman > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From fungi at yuggoth.org Tue Jul 3 12:08:04 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 3 Jul 2018 12:08:04 +0000 Subject: [openstack-dev] [sqlalchemy][db][oslo.db][mistral] Is there a recommended MySQL driver for OpenStack projects? In-Reply-To: <91cbd291-628b-4dfb-8a96-080af2ef6391@Spark> References: <91cbd291-628b-4dfb-8a96-080af2ef6391@Spark> Message-ID: <20180703120803.mnbbozlzbd3ymrmm@yuggoth.org> On 2018-07-03 18:33:44 +0700 (+0700), Renat Akhmerov wrote: [...] > So, I’m trying to understand if we have a “community recommended” > (or may be even mandatory) Mysql driver to use and what > consequences of the driver choice are. I’d appreciate any help > with clarifying this (may be links to some previous discussions > etc.) [...] There was a concerted effort around the first half of 2015 (prior to TC cycle goals or it probably would have been one) in which most projects switched to pymysql by default because mysql-python lacked Py3k support. Another significant up-side to pymysql is that it was implemented in pure Python rather than being a wrapper around libmysql, so simpler for dependency management. There was initially some concern that it would underperform, but subsequent benchmarking showed it not to be an issue in reality. https://wiki.openstack.org/wiki/PyMySQL_evaluation -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From doug at doughellmann.com Tue Jul 3 12:47:04 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 03 Jul 2018 08:47:04 -0400 Subject: [openstack-dev] [sqlalchemy][db][oslo.db][mistral] Is there a recommended MySQL driver for OpenStack projects? In-Reply-To: <91cbd291-628b-4dfb-8a96-080af2ef6391@Spark> References: <91cbd291-628b-4dfb-8a96-080af2ef6391@Spark> Message-ID: <1530621719-sup-3785@lrrr.local> Excerpts from Renat Akhmerov's message of 2018-07-03 18:33:44 +0700: > Hi, > > We’ve tried to address the bug [1] which is essentially caused > by the fact that we saw that MySQLDb driver wasn’t compatible with > eventlet’s green threads. In a nutshell, when we used the “eventlet” > RPC executor (see [2]), the system would get stuck once in a while > when dispatching green between green threads when it tried to hit > Mysql, but since the driver wasn’t eventlet friendly it didn’t work. > For that reason we had to use the “blocking” RPC executor so far > for Mistral Engine that deals with DB transactions. If you have a scaling issue that may be solved by eventlet, that's one thing, but please don't adopt eventlet just because a lot of other projects have. We've tried several times to minimize our reliance on eventlet because new releases tend to introduce bugs. Have you tried the 'threading' executor? 
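For reference, both suggestions made in this thread (switching to the pure-Python PyMySQL driver and picking a different oslo.messaging executor) are small, isolated changes. A minimal sketch follows; the connection details and the endpoint class are made up for the example and are not Mistral's actual code:

```python
# oslo.db side: select PyMySQL in the SQLAlchemy connection URL, e.g. in the
# service configuration file:
#   [database]
#   connection = mysql+pymysql://mistral:secret@db-host/mistral
#
# oslo.messaging side: choose the RPC executor when building the server.
import oslo_messaging as messaging
from oslo_config import cfg


class EngineEndpoint(object):
    """Made-up endpoint class standing in for the real RPC endpoints."""

    def start_workflow(self, ctxt, wf_name):
        pass


transport = messaging.get_rpc_transport(cfg.CONF)
target = messaging.Target(topic='mistral_engine', server='engine-1')
server = messaging.get_rpc_server(
    transport, target, [EngineEndpoint()],
    executor='threading')  # instead of 'blocking' or 'eventlet'
server.start()
```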
Doug From doug at doughellmann.com Tue Jul 3 13:16:51 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 03 Jul 2018 09:16:51 -0400 Subject: [openstack-dev] [neutron] autodoc and tox-siblings In-Reply-To: References: Message-ID: <1530623417-sup-5797@lrrr.local> Excerpts from Takashi Yamamoto's message of 2018-07-03 13:12:53 +0900: > hi, > > - networking-midonet uses autodoc in their doc. > build-openstack-sphinx-docs runs it. > - build-openstack-sphinx-docs doesn't use tox-siblings. thus the job > uses released versions of dependencies. eg. neutron, neutron-XXXaas, > os-vif, etc > - released versions of dependencies and networking-midonet master are > not necessarily compatible > - a consequence: https://bugs.launchpad.net/networking-midonet/+bug/1779801 > (in this case, neutron-lib and neutron are not compatible) > > possible solutions i can think of: > - stop using autodoc (i suspect i have to do this for now) > - make intermediate releases of neutron and friends > - finish neutron-lib work and stop importing neutron etc (ideal but we > have not reached this stage yet) > - make doc job use tox-siblings (as it used to do in tox_install era) > > any suggestions? > As part of the python3-first goal planning we determined that our current PTI defines the API between zuul and our jobs at the wrong level, and that is going to make it more difficult for us to support different versions of python on different branches. I plan to write up a governance change ASAP to have the PTI expect to call "tox -e docs" to build the documentation and then define a new job that works that way. It sounds like that change will also help in the case you have, since the new job will be able to support tox-siblings. Doug From jaypipes at gmail.com Tue Jul 3 13:59:49 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Tue, 3 Jul 2018 09:59:49 -0400 Subject: [openstack-dev] [sqlalchemy][db][oslo.db][mistral] Is there a recommended MySQL driver for OpenStack projects? In-Reply-To: <1530621719-sup-3785@lrrr.local> References: <91cbd291-628b-4dfb-8a96-080af2ef6391@Spark> <1530621719-sup-3785@lrrr.local> Message-ID: <1fe5626e-187a-861a-c67c-427a27426b3e@gmail.com> On 07/03/2018 08:47 AM, Doug Hellmann wrote: > If you have a scaling issue that may be solved by eventlet, that's > one thing, but please don't adopt eventlet just because a lot of > other projects have. We've tried several times to minimize our > reliance on eventlet because new releases tend to introduce bugs. > > Have you tried the 'threading' executor? +1 -jay From james.page at canonical.com Tue Jul 3 14:35:58 2018 From: james.page at canonical.com (James Page) Date: Tue, 3 Jul 2018 15:35:58 +0100 Subject: [openstack-dev] [sig][upgrade] Todays IRC meeting Message-ID: Hi All Unfortunately I can't make todays IRC meeting at 1600 UTC. Should be back for next week, but I think we need todo some rescheduling to fit better with other ops and dev meetings. Cheers James -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jaypipes at gmail.com Tue Jul 3 17:06:02 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Tue, 3 Jul 2018 13:06:02 -0400 Subject: [openstack-dev] [tc] [all] TC Report 18-26 In-Reply-To: <80d67ca3-81ef-875a-7ccb-2afad8913fb1@redhat.com> References: <70afeb87-37c9-1595-ffa4-aadbd1a90228@gmail.com> <1A3C52DFCD06494D8528644858247BF01C141B52@EX10MBOX03.pnnl.gov> <80d67ca3-81ef-875a-7ccb-2afad8913fb1@redhat.com> Message-ID: <67f015ab-b181-3ff0-4e4b-c30c503e1268@gmail.com> On 07/02/2018 03:31 PM, Zane Bitter wrote: > On 28/06/18 15:09, Fox, Kevin M wrote: >>   * made the barrier to testing/development as low as 'curl >> http://......minikube; minikube start' (this spurs adoption and >> contribution) > > That's not so different from devstack though. > >>   * not having large silo's in deployment projects allowed better >> communication on common tooling. >>   * Operator focused architecture, not project based architecture. >> This simplifies the deployment situation greatly. >>   * try whenever possible to focus on just the commons and push vendor >> specific needs to plugins so vendors can deal with vendor issues >> directly and not corrupt the core. > > I agree with all of those, but to be fair to OpenStack, you're leaving > out arguably the most important one: > >     * Installation instructions start with "assume a working datacenter" > > They have that luxury; we do not. (To be clear, they are 100% right to > take full advantage of that luxury. Although if there are still folks > who go around saying that it's a trivial problem and OpenStackers must > all be idiots for making it look so difficult, they should really stop > embarrassing themselves.) This. There is nothing trivial about the creation of a working datacenter -- never mind a *well-running* datacenter. Comparing Kubernetes to OpenStack -- particular OpenStack's lower levels -- is missing this fundamental point and ends up comparing apples to oranges. Best, -jay From davanum at gmail.com Tue Jul 3 17:08:30 2018 From: davanum at gmail.com (Davanum Srinivas) Date: Tue, 3 Jul 2018 13:08:30 -0400 Subject: [openstack-dev] [tc] [all] TC Report 18-26 In-Reply-To: <67f015ab-b181-3ff0-4e4b-c30c503e1268@gmail.com> References: <70afeb87-37c9-1595-ffa4-aadbd1a90228@gmail.com> <1A3C52DFCD06494D8528644858247BF01C141B52@EX10MBOX03.pnnl.gov> <80d67ca3-81ef-875a-7ccb-2afad8913fb1@redhat.com> <67f015ab-b181-3ff0-4e4b-c30c503e1268@gmail.com> Message-ID: On Tue, Jul 3, 2018 at 1:06 PM Jay Pipes wrote: > > On 07/02/2018 03:31 PM, Zane Bitter wrote: > > On 28/06/18 15:09, Fox, Kevin M wrote: > >> * made the barrier to testing/development as low as 'curl > >> http://......minikube; minikube start' (this spurs adoption and > >> contribution) > > > > That's not so different from devstack though. > > > >> * not having large silo's in deployment projects allowed better > >> communication on common tooling. > >> * Operator focused architecture, not project based architecture. > >> This simplifies the deployment situation greatly. > >> * try whenever possible to focus on just the commons and push vendor > >> specific needs to plugins so vendors can deal with vendor issues > >> directly and not corrupt the core. > > > > I agree with all of those, but to be fair to OpenStack, you're leaving > > out arguably the most important one: > > > > * Installation instructions start with "assume a working datacenter" > > > > They have that luxury; we do not. (To be clear, they are 100% right to > > take full advantage of that luxury. 
Although if there are still folks > > who go around saying that it's a trivial problem and OpenStackers must > > all be idiots for making it look so difficult, they should really stop > > embarrassing themselves.) > > This. > > There is nothing trivial about the creation of a working datacenter -- > never mind a *well-running* datacenter. Comparing Kubernetes to > OpenStack -- particular OpenStack's lower levels -- is missing this > fundamental point and ends up comparing apples to oranges. 100% > Best, > -jay > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Davanum Srinivas :: https://twitter.com/dims From juliaashleykreger at gmail.com Tue Jul 3 17:21:37 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Tue, 3 Jul 2018 13:21:37 -0400 Subject: [openstack-dev] [ironic] The state of the ironic universe - July 2nd, 2018 Message-ID: The state of the ironic universe This month we're trying a new format to keep those interested updated on what is going on in ironic. The intent is for our weekly updates to now take the form of a monthly newsletter to cover highlights of what is going on in the ironic community. If you have something to add in the future, please feel free to reach out and we can add it to the next edition. News! ===== - Long deprecated 'classic' ('pxe_*', 'agent_*) drivers are being removed. They will not be present in the next major version of ironic. - Ironic now has support to return nodes from maintenance state when BMC connectivity is restored from an outage. - BIOS configuration caching and setting assertion interface has merged and vendors are working on their implementations. >From OpenInfra Days China! -------------------------- * Users in china are interested in ironic! * Everything from small hundreds to thousands, basic OS installation to super computing use cases! * The larger deployments are encountering some of the scale issues larger operators have experienced in the past. * The language barrier is making it difficult to grasp the finer details of: Deployment error reporting/troubleshooting and high availability mechanics. * Some operators are interested in the ability to "clone" or "backup" an ironic node's contents in order to redeploy elsewhere and/or restore the machine state. * Many operators wishing to contribute felt that they were unable to because "we are not [a] big name", that they would be unable to gain traction or build consensus by not being a major contributor already. In these discussions, I stressed that we all have similar, if not the same, problems that we are trying to solve. Julia wrote a recent SuperUser post about this.[1] >From the OpenStack Summit ------------------------- Operator interests vary, but there are some common problems that operators have or are interested in solving. Attestation/Security Integration ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Some operators and deployers seek to strengthen their security posture via the use of TPMs, registration and attestation of status with attestation servers. In a sense, think of it as profile enforcement of bare metal. An initial specification [2] has been posted to try and figure out some of the details and potential integration points. 
Firmware Management (Version Discovery/Assertion) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Operators are seeking more capabilities to discover the current version of firmware on a particular bare metal node and then possibly take corrective action through ironic in order to update the firmware. The developer community does not presently have a plan to tackle this challenge, however doing so moves us closer to being a sort of attestation service. RAID prior to deploy and Software RAID ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ One of the frequent asks is for support for enabling the assertion of RAID configuration prior to deployment. Naturally this is somewhat problematic as this task CAN greatly extend the deployment time. Presently deployment steps[6] are anticipated to enable these sorts of workflows. Additionally the ask for Software RAID support seems to be ramping up. This is not a simple task for us to achieve, but conceivably it might take the same shape as hardware raid presently does, just with appropriate interface mechanisms during the deployment process. There are several conundrums, and the community needs to better understand desired cases before development of a solution can take place. Serial Console Logging ~~~~~~~~~~~~~~~~~~~~~~ Numerous operators expressed interest in having console logging support. This last seems to have been worked on last year[3] and likely needs a contributor to pick back up and champion it forward. Hardware CMDB/Asset Discovery/Recording and Tracking ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ While not directly a problem of deploying bare metal directly, information about hardware is needed to tie in tasks such as repairs, warranty, tax accounting, and so on and so forth. Often these problems becomes "solved" by disparate processes tracking information in several different places. There is a growing ask for something in the community to aid in this effort. Jokingly, we've already kind of come up with a name, but the current main ironic developer community doesn't have time to take on this challenge. The most viable path forward for interested operators is likely to detail the requirements and begin working together to implement something with integration with ironic. Rack Awareness/Conductor Locality ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Ironic is working on conductor locality, in terms of pinning specific bare metal nodes to specific ironic conductors. We hope that this functionality will be available in the final Rocky release.[4] Burn-in Tests ~~~~~~~~~~~~~ Operators expressed interest in having the capability to use ironic as part of burn-in proceses for hardware being added to the deployment. The developer community discussed implementing such tooling at the Rocky PTG and those discussions seemed to center around this being a clean step to perform some unknown actions on the ramdisk. The missing piece of the puzzle would be creating a "meta" step, and then executing additional steps. We mainly need to understand what would be good steps to implement in terms of actual actions to take for burning-in the node. Issues reported at the Summit ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ L3 Networking Documentation ^^^^^^^^^^^^^^^^^^^^^^^^^^^ Operators expressed a need for improved documentation in L3/multi-tenant networking integration. This is something the active developer community is attempting to improve as time permits. 
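Looping back to the burn-in item above: for operators wondering what "a clean step on the ramdisk" means mechanically, a custom ironic-python-agent hardware manager is the usual vehicle. The sketch below is illustrative only; the manager, step name, priorities and the stress workload are placeholders, not an agreed-upon ironic design:

```python
# Illustrative only: a custom ironic-python-agent hardware manager exposing a
# hypothetical burn-in clean step.
import subprocess

from ironic_python_agent import hardware


class BurnInHardwareManager(hardware.HardwareManager):

    HARDWARE_MANAGER_NAME = 'BurnInHardwareManager'
    HARDWARE_MANAGER_VERSION = '1.0'

    def evaluate_hardware_support(self):
        # Claim support so the step is offered on every node this ramdisk runs on.
        return hardware.HardwareSupport.SERVICE_PROVIDER

    def get_clean_steps(self, node, ports):
        return [{
            'step': 'burn_in_cpu',
            'priority': 0,          # 0 = only runs when requested explicitly
            'interface': 'deploy',
            'reboot_requested': False,
            'abortable': True,
        }]

    def burn_in_cpu(self, node, ports):
        # Placeholder workload; a real step would pick tools and durations
        # based on operator requirements (stress-ng, fio, memtester, ...).
        subprocess.check_call(['stress-ng', '--cpu', '0', '--timeout', '600s'])
```

(A real manager would also need to be built into the deploy ramdisk and registered as an entry point so the agent can discover it, which is out of scope for this sketch.)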
Mutlitenant networking + boot from volume without HBAs ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ An increasing desire seems to exist to operate boot from volume with Multi-tenant networking, although without direct storage attachment on to that network, the routers need to take on the networking load for the IO operations. As such, this is something that we never anticipated during development of the feature. The community needs more information to better understand the operational scenario. Recent New Specifications ========================= * L3 based ironic deployments[7] This work aims to allow operators to deploy utilizing virtual media in remote data centers, where no DHCP is present. * Boot from Ramdisk[8] This is an often requested feature from the Scientific computing community, and may allow us to better support other types of ramdisk based booting, such as root on NFS and root on RBD. * Security Interface[9] There is a growing desire for support for integration into security frameworks, ultimately to enable better use of TPMs and/or enable tighter operator specific workflow integrations. This would benefit from operator feedback. * Synchronize events with neutron[10] This describes introduction of processes to enable ironic to better synchronize its actions with neutron. * Direct Deploy with local HTTP server[11] This is an feature that would allow operators to utilize the "direct" deployment interface with a local HTTP server instead of glance being backed by swift and using swift tempurls. Recently merged specifications ============================== * VNC Graphical Console [5] * Conductor/Node locality [4] Things that might make good Summit or conference talks ====================================================== * Talks about experiences scaling ironic or running ironic at scale. * Experiences about customizing drivers or hardware types. * New use cases! [1]: http://superuser.openstack.org/articles/translating-context-understanding-the-global-open-source-community/ [2]: https://review.openstack.org/576718 [3]: https://review.openstack.org/#/c/453627 [4]: https://review.openstack.org/#/c/559420 [5]: https://review.openstack.org/306074 [6]: https://review.openstack.org/#/c/549493/ [7]: https://review.openstack.org/543936 [8]: https://review.openstack.org/576717 [9]: https://review.openstack.org/576718 [10]: https://review.openstack.org/343684 [11]: https://review.openstack.org/#/c/504039/ From s at cassiba.com Tue Jul 3 18:10:37 2018 From: s at cassiba.com (Samuel Cassiba) Date: Tue, 3 Jul 2018 11:10:37 -0700 Subject: [openstack-dev] [chef] State of the Kitchen: 5th Edition Message-ID: HTML: https://s.cassiba.com/openstack/state-of-the-kitchen-5th-edition/ This is the fifth installment of what is going on with Chef OpenStack. The aim is to give a quick overview to see our progress and what is on the menu. Feedback is always welcome on the content and what you'd like to see. Last month's edition was rather delayed due to an emergency surgery on one of my cats (he's doing fine) but other things took priority. Going forward, I'm going to stick as close to to the beginning of the month as I can. This will be a thin installment, as there were only a few things of note. ### Notable Changes * Nova APIs are now WSGI services handled by Apache. * Keystone has been reduced down to a single 'public' endpoint. ### Integration * Dokken works on both platforms with an ugly workaround. Presently, this results in allinone scenarios converging and testing inside a container. 
### Upcoming * Testing against RDO Rocky packages works. More to come, probably in a blog post. * Ubuntu 18.04 results in a mostly functional OpenStack instance, but it bumps into Python 3 problems along the way. * The mariadb cookbook has undergone a significant refactor resulting in a 2.0.0, but might not be updated until the focus switches to Rocky. ### On The Menu *Not Really "Instant" Roast Beast* (makes 4 servings, 2 if you're hungry) * 3 lbs / 1.3 kg bottom round beef roast, frozen to aid tenderizing * 3 cups / 700ml beef stock * 1 medium onion, sliced * 1 tsp / 4.2g minced garlic * Ground cayenne, granulated onion and garlic to taste. 1. Add a layer of sliced onions to the bottom of your electric pressure cooker (you DO have one, right?) 2. Add frozen(!) meat on top of the onions. 3. Add garlic, remaining onion pieces and powdered spices to the cooker. Do NOT add salt at this stage, as tempting as it may be. 4. Cook at medium pressure for 90 minutes. Allow for the pressure to reduce naturally. It can take an additional 30 minutes or more. Patience is rewarded. 5. Remove roast to a large dish, shred until it's to your preferred consistency. Optionally, remove the onion pieces, they've given their all. 6. Add xanthan gum or your preferred choice of thickener. Use cornstarch or flour if you're not super carb-conscious. Hit it with the immersion blender. 7. Return shredded meat to what could be misconstrued as gravy. Salt to taste and dig in. It gets better if you leave it overnight in the fridge to allow the flavors to redistribute. Your humble line cook, Samuel Cassiba From doug at doughellmann.com Tue Jul 3 18:16:18 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 03 Jul 2018 14:16:18 -0400 Subject: [openstack-dev] [tc] Technical Committee Update for 3 July Message-ID: <1530641744-sup-28@lrrr.local> This is the weekly summary of work being done by the Technical Committee members. The full list of active items is managed in the wiki: https://wiki.openstack.org/wiki/Technical_Committee_Tracker We also track TC objectives for the cycle using StoryBoard at: https://storyboard.openstack.org/#!/project/923 == Recent Activity == Approved changes: * add a note about tracking cycle goals after a cycle ends: https://review.openstack.org/#/c/577149/ Office hour logs from last week: * http://eavesdrop.openstack.org/meetings/tc/2018/tc.2018-06-27-01.00.html * http://eavesdrop.openstack.org/meetings/tc/2018/tc.2018-06-28-15.00.html * http://eavesdrop.openstack.org/meetings/tc/2018/tc.2018-07-03-09.01.html In the absence of any feedback about using the meeting bot to record the office hours, we will continue to do so, for now. == Ongoing Discussions == The Adjutant team application is still under discussion. * https://review.openstack.org/553643 project team application Discussions about affiliation diversity continue in two directions. Zane's proposal for requirements for new project teams has stalled a bit. The work Thierry and Mohammed have done on the diversity tags has brought a new statistics script and a proposal to drop the use of the tags in favor of folding the diversity information into the more general health checks we are doing. Thierry has updated the health tracker page with information about all of the teams based on the most recent run of the scripts. 
* Zane's proposal for requirements for new projects: * https://review.openstack.org/#/c/567944/ * Thierry's proposal to drop the tags: * https://review.openstack.org/#/c/579870/ * Team "fragility" scripts: https://review.openstack.org/#/c/579142/ Thierry proposed changes to the Project Team Guide to include a technical guidance section. * https://review.openstack.org/#/c/578070/1 Chris and Thierry have been discussing a technical vision for OpenStack. * https://etherpad.openstack.org/p/tech-vision-2018 * http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-06-26.log.html#t2018-06-26T09:09:57 == TC member actions/focus/discussions for the coming week(s) == Please vote on the Adjutant team application. https://review.openstack.org/553643 Project team "health check" interviews continue. Our goal is to check in with each team between now and the PTG, and record notes in the wiki. * https://wiki.openstack.org/wiki/OpenStack_health_tracker Remember that we agreed to send status updates on initiatives separately to openstack-dev every two weeks. If you are working on something for which there has not been an update in a couple of weeks, please consider summarizing the status. == Contacting the TC == The Technical Committee uses a series of weekly "office hour" time slots for synchronous communication. We hope that by having several such times scheduled, we will have more opportunities to engage with members of the community from different timezones. Office hour times in #openstack-tc: * 09:00 UTC on Tuesdays * 01:00 UTC on Wednesdays * 15:00 UTC on Thursdays If you have something you would like the TC to discuss, you can add it to our office hour conversation starter etherpad at:https://etherpad.openstack.org/p/tc-office-hour-conversation-starters Many of us also run IRC bouncers which stay in #openstack-tc most of the time, so please do not feel that you need to wait for an office hour time to pose a question or offer a suggestion. You can use the string "tc-members" to alert the members to your question. If you expect your topic to require significant discussion or to need input from members of the community other than the TC, please start a mailing list discussion on openstack-dev at lists.openstack.org and use the subject tag "[tc]" to bring it to the attention of TC members. From cdent+os at anticdent.org Tue Jul 3 18:31:43 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 3 Jul 2018 19:31:43 +0100 (BST) Subject: [openstack-dev] [tc] Technical Committee Update for 3 July In-Reply-To: <1530641744-sup-28@lrrr.local> References: <1530641744-sup-28@lrrr.local> Message-ID: On Tue, 3 Jul 2018, Doug Hellmann wrote: > Chris and Thierry have been discussing a technical vision for OpenStack. > > * https://etherpad.openstack.org/p/tech-vision-2018 Just so it's clear and credit where credit (or blame!) is due: Zane has been a leading part of this too. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From Kevin.Fox at pnnl.gov Tue Jul 3 18:37:50 2018 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Tue, 3 Jul 2018 18:37:50 +0000 Subject: [openstack-dev] [tc] [all] TC Report 18-26 In-Reply-To: <542aba82-5c0c-3549-e587-2deded610fe9@gmail.com> References: <70afeb87-37c9-1595-ffa4-aadbd1a90228@gmail.com> , <542aba82-5c0c-3549-e587-2deded610fe9@gmail.com> Message-ID: <1A3C52DFCD06494D8528644858247BF01C143256@EX10MBOX03.pnnl.gov> Yes/no on the vendor distro thing. 
They do provide a lot of options, but they also provide a fully k8s tested/provided route too. kubeadm. I can take linux distro of choice, curl down kubeadm and get a working kubernetes in literally a couple minutes. No compiling anything or building containers. That is what I mean when I say they have a product. Other vendors provide their own builds, release tooling, or config management integration. which is why that list is so big. But it is up to the Operators to decide the route and due to k8s having a very clean, easy, low bar for entry it sets the bar for the other products to be even better. The reason people started adopting clouds was because it was very quick to request resources. One of clouds features (some say drawbacks) vs VM farms has been ephemeralness. You build applications on top of VMs to provide a Service to your Users. Great. Things like Containers though launch much faster and have generally more functionality for plumbing them together then VMs do though. So these days containers are out clouding vms at this use case. So, does Nova continue to be cloudy vm or does it go for the more production vm use case like oVirt and VMware? Without strong orchestration of some kind on top the cloudy use case is also really hard with Nova. So we keep getting into this tug of war between people wanting VM's as a building blocks of cloud scale applications, and those that want Nova to be an oVirt/VMware replacement. Honestly, its not doing either use case great because it cant decide what to focus on. oVirt is a better VMware alternative today then Nova is, having used it. It focuses specifically on the same use cases. Nova is better at being a cloud then oVirt and VMware. but lags behind Azure/AWS a lot when it comes to having apps self host on it. (progress is being made again finally. but its slow) While some people only ever consider running Kubernetes on top of a cloud, some of us realize maintaining both a cloud an a kubernetes is unnecessary and can greatly simplify things simply by running k8s on bare metal. This does then make it a competitor to Nova as a platform for running workload on. As k8s gains more multitenancy features, this trend will continue to grow I think. OpenStack needs to be ready for when that becomes a thing. Heat is a good start for an orchestration system, but it is hamstrung by it being an optional component, by there still not being a way to download secrets to a vm securely from the secret store, by the secret store also being completely optional, etc. An app developer can't rely on any of it. :/ Heat is hamstrung by the lack of blessing so many other OpenStack services are. You can't fix it until you fix that fundamental brokenness in OpenStack. Heat is also hamstrung being an orchestrator of existing API's by there being holes in the API's. Think of OpenStack like a game console. The moment you make a component optional and make it takes extra effort to obtain, few software developers target it and rarely does anyone one buy the addons it because there isn't software for it. Right now, just about everything in OpenStack is an addon. Thats a problem. 
Thanks, Kevin ________________________________________ From: Jay Pipes [jaypipes at gmail.com] Sent: Monday, July 02, 2018 4:13 PM To: openstack-dev at lists.openstack.org Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26 On 06/27/2018 07:23 PM, Zane Bitter wrote: > On 27/06/18 07:55, Jay Pipes wrote: >> Above, I was saying that the scope of the *OpenStack* community is >> already too broad (IMHO). An example of projects that have made the >> *OpenStack* community too broad are purpose-built telco applications >> like Tacker [1] and Service Function Chaining. [2] >> >> I've also argued in the past that all distro- or vendor-specific >> deployment tools (Fuel, Triple-O, etc [3]) should live outside of >> OpenStack because these projects are more products and the relentless >> drive of vendor product management (rightfully) pushes the scope of >> these applications to gobble up more and more feature space that may >> or may not have anything to do with the core OpenStack mission (and >> have more to do with those companies' product roadmap). > > I'm still sad that we've never managed to come up with a single way to > install OpenStack. The amount of duplicated effort expended on that > problem is mind-boggling. At least we tried though. Excluding those > projects from the community would have just meant giving up from the > beginning. You have to have motivation from vendors in order to achieve said single way of installing OpenStack. I gave up a long time ago on distros and vendors to get behind such an effort. Where vendors see $$$, they will attempt to carve out value differentiation. And value differentiation leads to, well, differences, naturally. And, despite what some might misguidedly think, Kubernetes has no single installation method. Their *official* setup/install page is here: https://kubernetes.io/docs/setup/pick-right-solution/ It lists no fewer than *37* (!) different ways of installing Kubernetes, and I'm not even including anything listed in the "Custom Solutions" section. > I think Thierry's new map, that collects installer services in a > separate bucket (that may eventually come with a separate git namespace) > is a helpful way of communicating to users what's happening without > forcing those projects outside of the community. Sure, I agree the separate bucket is useful, particularly when paired with information that allows operators to know how stable and/or bleeding edge the code is expected to be -- you know, those "tags" that the TC spent time curating. >>> So to answer your question: >>> >>> zaneb: yeah... nobody I know who argues for a small stable >>> core (in Nova) has ever said there should be fewer higher layer >>> services. >>> zaneb: I'm not entirely sure where you got that idea from. >> >> Note the emphasis on *Nova* above? >> >> Also note that when I've said that *OpenStack* should have a smaller >> mission and scope, that doesn't mean that higher-level services aren't >> necessary or wanted. > > Thank you for saying this, and could I please ask you to repeat this > disclaimer whenever you talk about a smaller scope for OpenStack. Yes. I shall shout it from the highest mountains. [1] > Because for those of us working on higher-level services it feels like > there has been a non-stop chorus (both inside and outside the project) > of people wanting to redefine OpenStack as something that doesn't > include us. 
I've said in the past (on Twitter, can't find the link right now, but it's out there somewhere) something to the effect of "at some point, someone just needs to come out and say that OpenStack is, at its core, Nova, Neutron, Keystone, Glance and Cinder". Perhaps this is what you were recollecting. I would use a different phrase nowadays to describe what I was thinking with the above. I would say instead "Nova, Neutron, Cinder, Keystone and Glance [2] are a definitive lower level of an OpenStack deployment. They represent a set of required integrated services that supply the most basic infrastructure for datacenter resource management when deploying OpenStack." Note the difference in wording. Instead of saying "OpenStack is X", I'm saying "These particular services represent a specific layer of an OpenStack deployment". Nowadays, I would further add something to the effect of "Depending on the particular use cases and workloads the OpenStack deployer wishes to promote, an additional layer of services provides workload orchestration and workflow management capabilities. This layer of services include Heat, Mistral, Tacker, Service Function Chaining, Murano, etc". Does that provide you with some closure on this feeling of "non-stop chorus" of exclusion that you mentioned above? > The reason I haven't dropped this discussion is because I really want to > know if _all_ of those people were actually talking about something else > (e.g. a smaller scope for Nova), or if it's just you. Because you and I > are in complete agreement that Nova has grown a lot of obscure > capabilities that make it fiendishly difficult to maintain, and that in > many cases might never have been requested if we'd had higher-level > tools that could meet the same use cases by composing simpler operations. > > IMHO some of the contributing factors to that were: > > * The aforementioned hostility from some quarters to the existence of > higher-level projects in OpenStack. > * The ongoing hostility of operators to deploying any projects outside > of Keystone/Nova/Glance/Neutron/Cinder (*still* seen playing out in the > Barbican vs. Castellan debate, where we can't even correct one of > OpenStack's original sins and bake in a secret store - something k8s > managed from day one - because people don't want to install another ReST > API even over a backend that they'll already have to install anyway). > * The illegibility of public Nova interfaces to potential higher-level > tools. I would like to point something else out here. Something that may not be pleasant to confront. Heat's competition (for resources and mindshare) is Kubernetes, plain and simple. Heat's competition is not other OpenStack projects. Nova's competition is not Kubernetes (despite various people continuing to say that it is). Nova is not an orchestration system. Never was and (as long as I'm kicking and screaming) never will be. Nova's primary competition is: * Stand-alone Ironic * oVirt and stand-alone virsh callers * Parts of VMWare vCenter [3] * MaaS in some respects * The *compute provisioning* parts of EC2, Azure, and GCP This is why there is a Kubernetes OpenStack cloud provider plugin [4]. This plugin uses Nova [5] (which can potentially use Ironic), Cinder, Keystone and Neutron to deploy kubelets to act as nodes in a Kubernetes cluster and load balancer objects to act as the proxies that k8s itself uses when deploying Pods and Services. 
Heat's architecture, template language and object constructs are in direct competition with Kubernetes' API and architecture, with the primary difference being a VM-centric [6] vs. a container-centric object model. Heat's template language is similar to Helm's chart template YAML structure [7], and with Heat's evolution to the "convergence model", Heat's architecture actually got closer to Kubernetes' architecture: that of continually attempting to converge an observed state with a desired state. So, what is Heat to do? The hype and marketing machine is never-ending, I'm afraid. [8] I'm not sure there's actually anything that can be done about this. Perhaps it is a fait accomplis that Kubernetes/Helm will/has become synonymous with "orchestration of things". Perhaps not. I'm not an oracle, unfortunately. Maybe the only thing that Heat can do to fend off the coming doom is to make a case that Heat's performance, reliability, feature set or integration with OpenStack's other services make it a better candidate for orchestrating virtual machine or baremetal workloads on an OpenStack deployment than Kubernetes is. Sorry to be the bearer of bad news, -jay [1] I live in Florida, though, which has no mountains. But, when I visit, say, North Carolina, I shall certainly shout it from their mountains. [2] some would also say Castellan, Ironic and Designate belong here. [3] Though VMWare is still trying to be everything that certain IT administrators ever needed, including orchestration, backup services, block storage pooling, high availability, quota management, etc etc [4] https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/#openstack [5] https://github.com/kubernetes/kubernetes/blob/92b81114f43f3ca74988194406957a5d1ffd1c5d/pkg/cloudprovider/providers/openstack/openstack.go#L377 [6] The fact that Heat started as a CloudFormation API clone gave it its VM-centricity. [7] https://github.com/kubernetes/helm/blob/master/docs/chart_template_guide/index.md [8] The Kubernetes' machine has essentially decimated all the other "orchestration of things" projects' resources and mindshare, including a number of them that were very well architected, well coded, and well documented: * Mesos with Marathon/Aurora * Rancher * OpenShift (you know, the original, original one...) * Nomad * Docker Swarm/Compose __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From doug at doughellmann.com Tue Jul 3 18:52:40 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 03 Jul 2018 14:52:40 -0400 Subject: [openstack-dev] [tc] Technical Committee Update for 3 July In-Reply-To: References: <1530641744-sup-28@lrrr.local> Message-ID: <1530643922-sup-6213@lrrr.local> Excerpts from Chris Dent's message of 2018-07-03 19:31:43 +0100: > On Tue, 3 Jul 2018, Doug Hellmann wrote: > > > Chris and Thierry have been discussing a technical vision for OpenStack. > > > > * https://etherpad.openstack.org/p/tech-vision-2018 > > Just so it's clear and credit where credit (or blame!) is due: Zane > has been a leading part of this too. > Thanks, Chris. Zane, I apologize for the oversight. Doug From Brianna.Poulos at jhuapl.edu Tue Jul 3 19:13:14 2018 From: Brianna.Poulos at jhuapl.edu (Poulos, Brianna L.) 
Date: Tue, 3 Jul 2018 19:13:14 +0000
Subject: [openstack-dev] [barbican][cinder][glance][nova] Goodbye from JHUAPL
Message-ID: 

All,

After over five years of contributing security features to OpenStack, the JHUAPL team is wrapping up our involvement with OpenStack.

To all who have reviewed/improved/accepted our contributions, thank you. It has been a pleasure to be a part of the community.

Regards,

The JHUAPL OpenStack Team
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jaypipes at gmail.com Tue Jul 3 19:25:20 2018
From: jaypipes at gmail.com (Jay Pipes)
Date: Tue, 3 Jul 2018 15:25:20 -0400
Subject: [openstack-dev] [barbican][cinder][glance][nova] Goodbye from JHUAPL
In-Reply-To: 
References: 
Message-ID: <24561929-325d-2580-1bcd-5a6ebf8fbf71@gmail.com>

Thanks so much for your contributions to our ecosystem, Brianna! I'm sad to see you go! :(

Best,
-jay

On 07/03/2018 03:13 PM, Poulos, Brianna L. wrote:
> All,
>
> After over five years of contributing security features to OpenStack,
> the JHUAPL team is wrapping up our involvement with OpenStack.
>
> To all who have reviewed/improved/accepted our contributions, thank
> you. It has been a pleasure to be a part of the community.
>
> Regards,
>
> The JHUAPL OpenStack Team
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From Kevin.Fox at pnnl.gov Tue Jul 3 19:37:01 2018
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Tue, 3 Jul 2018 19:37:01 +0000
Subject: [openstack-dev] [tc] [all] TC Report 18-26
In-Reply-To: 
References: <70afeb87-37c9-1595-ffa4-aadbd1a90228@gmail.com>
 <1A3C52DFCD06494D8528644858247BF01C141B52@EX10MBOX03.pnnl.gov>
 <47f04147-d83d-4766-15e5-4e8e6e7c3a82@gmail.com>
 <1A3C52DFCD06494D8528644858247BF01C142AA9@EX10MBOX03.pnnl.gov>,
Message-ID: <1A3C52DFCD06494D8528644858247BF01C1432B2@EX10MBOX03.pnnl.gov>

Heh. You're not going to like it. :)

The very fastest path I can think of, though it is super disruptive, is the following (there are also less disruptive paths):

First, define what OpenStack will be. If you don't know, you easily run into people working at cross purposes. Maybe there are other things that will be sister projects. That's fine. But it needs to be a whole product/project, not split on interests. Think k8s SIGs, not OpenStack projects. The final result is a singular thing though. k8s x.y.z. openstack iaas 2.y.z or something like that. Have a look at what KubeVirt is doing. I think they have the right approach.

Then, define K8s to be part of the commons. It provides a large amount of functionality OpenStack needs in the commons. If it is there, you can reuse it and not reinvent it.

Implement a new version of each OpenStack service's API on top of the K8s API using CRDs. At the same time, since we have now defined what OpenStack will be, ensure the API has all the base use cases covered. Provide a REST service -> CRD adapter to enable backwards compatibility with older OpenStack API versions.

This completely removes statefulness from OpenStack services. Rather than have a dozen databases you have just an etcd system under the hood. It provides locking, and events as well. So no oslo.locking backing service, no message queue, no sql databases. This GREATLY simplifies what the operators need to do. This removes a lot of code too.
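As a rough illustration of what "just an etcd system under the hood" could look like from a service's point of view, here is a sketch using the python-etcd3 client; the key names and the lock name are made up for the example and are not any existing service's schema:

    import etcd3

    etcd = etcd3.client(host='127.0.0.1', port=2379)

    # State that would otherwise live in a per-service SQL database.
    etcd.put('/openstack/compute/node1/state', 'enabled')
    value, metadata = etcd.get('/openstack/compute/node1/state')

    # Distributed locking that would otherwise need oslo.locking plus a backend.
    with etcd.lock('resize-instance-abc123', ttl=30):
        pass  # do the exclusive work here

    # Events that would otherwise travel over a message queue: watch a prefix.
    events, cancel = etcd.watch_prefix('/openstack/compute/')
    for event in events:
        print(event.key, event.value)
        break
    cancel()

Whether that trade is worth it is exactly the operator-overhead argument being made here, but the mechanics are not exotic.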
Backups are simpler as there is only one thing. Operators life is drastically simpler. upgrade tools should be unified. you upgrade your openstack deployment, not upgrade nova, upgrade glance, upgrade neutron, ..., etc Config can be easier as you can ship config with the same mechanism. Currently the operator tries to define cluster config and it gets twisted and split up per project/per node/sub component. Service account stuff is handled by kubernetes service accounts. so no rpc over amqp security layer and shipping around credentials manually in config files, and figuring out how to roll credentials, etc. agent stuff is much simpler. less code. Provide prebuilt containers for all of your components and some basic tooling to deploy it on a k8s. K8s provides a lot of tooling here. We've been building it over and over in deployment tools. we can get rid of most of it. Use http for everything. We all have acknowledged we have been torturing rabbit for a while. but its still a critical piece of infrastructure at the core today. We need to stop. Provide a way to have a k8s secret poked into a vm. I could go on, but I think there is enough discussion points here already. And I wonder if anyone made it this far without their head exploding already. :) Thanks, Kevin ________________________________________ From: Jay Pipes [jaypipes at gmail.com] Sent: Monday, July 02, 2018 2:45 PM To: openstack-dev at lists.openstack.org Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26 On 07/02/2018 03:12 PM, Fox, Kevin M wrote: > I think a lot of the pushback around not adding more common/required services is the extra load it puts on ops though. hence these: >> * Consider abolishing the project walls. >> * simplify the architecture for ops > > IMO, those need to change to break free from the pushback and make progress on the commons again. What *specifically* would you do, Kevin? -jay __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From Kevin.Fox at pnnl.gov Tue Jul 3 19:48:17 2018 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Tue, 3 Jul 2018 19:48:17 +0000 Subject: [openstack-dev] [tc] [all] TC Report 18-26 In-Reply-To: <67f015ab-b181-3ff0-4e4b-c30c503e1268@gmail.com> References: <70afeb87-37c9-1595-ffa4-aadbd1a90228@gmail.com> <1A3C52DFCD06494D8528644858247BF01C141B52@EX10MBOX03.pnnl.gov> <80d67ca3-81ef-875a-7ccb-2afad8913fb1@redhat.com>, <67f015ab-b181-3ff0-4e4b-c30c503e1268@gmail.com> Message-ID: <1A3C52DFCD06494D8528644858247BF01C1432C5@EX10MBOX03.pnnl.gov> I don't dispute trivial, but a self hosting k8s on bare metal is not incredibly hard. In fact, it is easier then you might think. k8s is a platform for deploying/managing services. Guess what you need to provision bare metal? Just a few microservices. A dhcp service. dhcpd in a daemonset works well. some pxe infrastructure. pixiecore with a simple http backend works pretty well in practice. a service to provide installation instructions. nginx server handing out kickstart files for example. and a place to fetch rpms from in case you don't have internet access or want to ensure uniformity. nginx server with a mirror yum repo. Its even possible to seed it on minikube and sluff it off to its own cluster. The main hard part about it is currently no one is shipping a reference implementation of the above. 
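For what "dhcpd in a daemonset" might look like in practice, here is a rough sketch using the Kubernetes Python client; the image name, arguments and namespace are placeholders, not a reference implementation:

    from kubernetes import client, config

    config.load_kube_config()  # assumes a working kubeconfig

    dhcp = client.V1DaemonSet(
        metadata=client.V1ObjectMeta(name='provision-dhcpd'),
        spec=client.V1DaemonSetSpec(
            selector=client.V1LabelSelector(match_labels={'app': 'provision-dhcpd'}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={'app': 'provision-dhcpd'}),
                spec=client.V1PodSpec(
                    host_network=True,  # DHCP/PXE traffic needs the node's own network
                    containers=[client.V1Container(
                        name='dhcpd',
                        image='example/dhcpd:latest',                # placeholder image
                        args=['-f', '-cf', '/etc/dhcp/dhcpd.conf'],  # placeholder config path
                    )]))))

    client.AppsV1Api().create_namespaced_daemon_set(namespace='default', body=dhcp)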
That may change... It is certainly much much easier then deploying enough OpenStack to get a self hosting ironic working. Thanks, Kevin ________________________________________ From: Jay Pipes [jaypipes at gmail.com] Sent: Tuesday, July 03, 2018 10:06 AM To: openstack-dev at lists.openstack.org Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26 On 07/02/2018 03:31 PM, Zane Bitter wrote: > On 28/06/18 15:09, Fox, Kevin M wrote: >> * made the barrier to testing/development as low as 'curl >> http://......minikube; minikube start' (this spurs adoption and >> contribution) > > That's not so different from devstack though. > >> * not having large silo's in deployment projects allowed better >> communication on common tooling. >> * Operator focused architecture, not project based architecture. >> This simplifies the deployment situation greatly. >> * try whenever possible to focus on just the commons and push vendor >> specific needs to plugins so vendors can deal with vendor issues >> directly and not corrupt the core. > > I agree with all of those, but to be fair to OpenStack, you're leaving > out arguably the most important one: > > * Installation instructions start with "assume a working datacenter" > > They have that luxury; we do not. (To be clear, they are 100% right to > take full advantage of that luxury. Although if there are still folks > who go around saying that it's a trivial problem and OpenStackers must > all be idiots for making it look so difficult, they should really stop > embarrassing themselves.) This. There is nothing trivial about the creation of a working datacenter -- never mind a *well-running* datacenter. Comparing Kubernetes to OpenStack -- particular OpenStack's lower levels -- is missing this fundamental point and ends up comparing apples to oranges. Best, -jay __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From jaypipes at gmail.com Tue Jul 3 20:04:21 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Tue, 3 Jul 2018 16:04:21 -0400 Subject: [openstack-dev] [tc] [all] TC Report 18-26 In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C143256@EX10MBOX03.pnnl.gov> References: <70afeb87-37c9-1595-ffa4-aadbd1a90228@gmail.com> <542aba82-5c0c-3549-e587-2deded610fe9@gmail.com> <1A3C52DFCD06494D8528644858247BF01C143256@EX10MBOX03.pnnl.gov> Message-ID: <480ac9d2-65c1-39ff-ec0d-bceade3e1def@gmail.com> I'll answer inline, so that it's easier to understand what part of your message I'm responding to. On 07/03/2018 02:37 PM, Fox, Kevin M wrote: > Yes/no on the vendor distro thing. They do provide a lot of options, but they also provide a fully k8s tested/provided route too. kubeadm. I can take linux distro of choice, curl down kubeadm and get a working kubernetes in literally a couple minutes. How is this different from devstack? With both approaches: * Download and run a single script * Any sort of networking outside of super basic setup requires manual intervention * Not recommended for "production" * Require workarounds when running as not-root Is it that you prefer the single Go binary approach of kubeadm which hides much of the details that devstack was designed to output (to help teach people what's going on under the hood)? > No compiling anything or building containers. That is what I mean when I say they have a product. 
What does devstack compile? By "compile" are you referring to downloading code from git repositories? Or are you referring to the fact that with kubeadm you are downloading a Go binary that hides the downloading and installation of all the other Kubernetes images for you [1]? [1] https://github.com/kubernetes/kubernetes/blob/8d73473ce8118422c9e0c2ba8ea669ebbf8cee1c/cmd/kubeadm/app/cmd/init.go#L267 https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/images/images.go#L63 > Other vendors provide their own builds, release tooling, or config management integration. which is why that list is so big. But it is up to the Operators to decide the route and due to k8s having a very clean, easy, low bar for entry it sets the bar for the other products to be even better. I fail to see how devstack and kubeadm aren't very much in the same vein? > The reason people started adopting clouds was because it was very quick to request resources. One of clouds features (some say drawbacks) vs VM farms has been ephemeralness. You build applications on top of VMs to provide a Service to your Users. Great. Things like Containers though launch much faster and have generally more functionality for plumbing them together then VMs do though. Not sure what this has to do with what we've been discussing. > So these days containers are out clouding vms at this use case. So, does Nova continue to be cloudy vm or does it go for the more production vm use case like oVirt and VMware? "production VM" use case like oVirt or VMWare? I don't know what that means. You mean "a GUI-based VM management system"? > Without strong orchestration of some kind on top the cloudy use case is also really hard with Nova. So we keep getting into this tug of war between people wanting VM's as a building blocks of cloud scale applications, and those that want Nova to be an oVirt/VMware replacement. Honestly, its not doing either use case great because it cant decide what to focus on. No, that's not at all what I've been saying. I continue to see Nova (and other services in its layer of OpenStack) as a building block *for higher-level systems like Kubernetes or Heat*. There is a reason that Kubernetes has an OpenStack cloud provider plugin, and that plugin calls imperative Nova, Neutron, Cinder, and Keystone API calls. > oVirt is a better VMware alternative today then Nova is, having used it. It focuses specifically on the same use cases. Nova is better at being a cloud then oVirt and VMware. but lags behind Azure/AWS a lot when it comes to having apps self host on it. (progress is being made again finally. but its slow) I'm not particularly interested in having Nova be a free VMWare replacement -- or in trying to be whatever oVirt has become. Some might see usefulness in these things, and as long as the feature requests to Nova don't cause Nova to become something other than low-level compute plumbing, I'm fine with that. > While some people only ever consider running Kubernetes on top of a cloud, some of us realize maintaining both a cloud an a kubernetes is unnecessary and can greatly simplify things simply by running k8s on bare metal. This does then make it a competitor to Nova as a platform for running workload on. What percentage of Kubernetes users deploy on baremetal (and continue to deploy on baremetal in production as opposed to just toying around with it)? > As k8s gains more multitenancy features, this trend will continue to grow I think. OpenStack needs to be ready for when that becomes a thing. 
OpenStack is already multi-tenant, being designed as such from day one. With the exception of Ironic, which uses Nova to enable multi-tenancy. What specifically are you referring to "OpenStack needs to be ready"? Also, what specific parts of OpenStack are you referring to there? > Heat is a good start for an orchestration system, but it is hamstrung by it being an optional component, by there still not being a way to download secrets to a vm securely from the secret store, by the secret store also being completely optional, etc. An app developer can't rely on any of it. :/ Heat is hamstrung by the lack of blessing so many other OpenStack services are. You can't fix it until you fix that fundamental brokenness in OpenStack. I guess I just fundamentally disagree that having a monolithic all-things-for-all-users application architecture and feature set is something that OpenStack should be. There is a *reason* that Kubernetes jettisoned all the cloud provider code from its core. The reason is because setting up that base stuff is *hard* and that work isn't germane to what Kubernetes is (a container orchestration system, not a datacenter resource management system). > Heat is also hamstrung being an orchestrator of existing API's by there being holes in the API's. I agree there are some holes in some of the APIs. Happy to work on plugging those holes as long as the holes are properly identified as belonging to the correct API and are not simply a feature request what would expand the scope of lower-level plumbing services like Nova. > Think of OpenStack like a game console. The moment you make a component optional and make it takes extra effort to obtain, few software developers target it and rarely does anyone one buy the addons it because there isn't software for it. Right now, just about everything in OpenStack is an addon. Thats a problem. I don't have any game consoles nor do I develop software for them, so I don't really see the correlation here. That said, I'm 100% against a monolithic application approach, as I've mentioned before. Best, -jay > Thanks, > Kevin > > > ________________________________________ > From: Jay Pipes [jaypipes at gmail.com] > Sent: Monday, July 02, 2018 4:13 PM > To: openstack-dev at lists.openstack.org > Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26 > > On 06/27/2018 07:23 PM, Zane Bitter wrote: >> On 27/06/18 07:55, Jay Pipes wrote: >>> Above, I was saying that the scope of the *OpenStack* community is >>> already too broad (IMHO). An example of projects that have made the >>> *OpenStack* community too broad are purpose-built telco applications >>> like Tacker [1] and Service Function Chaining. [2] >>> >>> I've also argued in the past that all distro- or vendor-specific >>> deployment tools (Fuel, Triple-O, etc [3]) should live outside of >>> OpenStack because these projects are more products and the relentless >>> drive of vendor product management (rightfully) pushes the scope of >>> these applications to gobble up more and more feature space that may >>> or may not have anything to do with the core OpenStack mission (and >>> have more to do with those companies' product roadmap). >> >> I'm still sad that we've never managed to come up with a single way to >> install OpenStack. The amount of duplicated effort expended on that >> problem is mind-boggling. At least we tried though. Excluding those >> projects from the community would have just meant giving up from the >> beginning. 
> > You have to have motivation from vendors in order to achieve said single > way of installing OpenStack. I gave up a long time ago on distros and > vendors to get behind such an effort. > > Where vendors see $$$, they will attempt to carve out value > differentiation. And value differentiation leads to, well, differences, > naturally. > > And, despite what some might misguidedly think, Kubernetes has no single > installation method. Their *official* setup/install page is here: > > https://kubernetes.io/docs/setup/pick-right-solution/ > > It lists no fewer than *37* (!) different ways of installing Kubernetes, > and I'm not even including anything listed in the "Custom Solutions" > section. > >> I think Thierry's new map, that collects installer services in a >> separate bucket (that may eventually come with a separate git namespace) >> is a helpful way of communicating to users what's happening without >> forcing those projects outside of the community. > > Sure, I agree the separate bucket is useful, particularly when paired > with information that allows operators to know how stable and/or > bleeding edge the code is expected to be -- you know, those "tags" that > the TC spent time curating. > >>>> So to answer your question: >>>> >>>> zaneb: yeah... nobody I know who argues for a small stable >>>> core (in Nova) has ever said there should be fewer higher layer >>>> services. >>>> zaneb: I'm not entirely sure where you got that idea from. >>> >>> Note the emphasis on *Nova* above? >>> >>> Also note that when I've said that *OpenStack* should have a smaller >>> mission and scope, that doesn't mean that higher-level services aren't >>> necessary or wanted. >> >> Thank you for saying this, and could I please ask you to repeat this >> disclaimer whenever you talk about a smaller scope for OpenStack. > > Yes. I shall shout it from the highest mountains. [1] > >> Because for those of us working on higher-level services it feels like >> there has been a non-stop chorus (both inside and outside the project) >> of people wanting to redefine OpenStack as something that doesn't >> include us. > > I've said in the past (on Twitter, can't find the link right now, but > it's out there somewhere) something to the effect of "at some point, > someone just needs to come out and say that OpenStack is, at its core, > Nova, Neutron, Keystone, Glance and Cinder". > > Perhaps this is what you were recollecting. I would use a different > phrase nowadays to describe what I was thinking with the above. > > I would say instead "Nova, Neutron, Cinder, Keystone and Glance [2] are > a definitive lower level of an OpenStack deployment. They represent a > set of required integrated services that supply the most basic > infrastructure for datacenter resource management when deploying OpenStack." > > Note the difference in wording. Instead of saying "OpenStack is X", I'm > saying "These particular services represent a specific layer of an > OpenStack deployment". > > Nowadays, I would further add something to the effect of "Depending on > the particular use cases and workloads the OpenStack deployer wishes to > promote, an additional layer of services provides workload orchestration > and workflow management capabilities. This layer of services include > Heat, Mistral, Tacker, Service Function Chaining, Murano, etc". > > Does that provide you with some closure on this feeling of "non-stop > chorus" of exclusion that you mentioned above? 
> >> The reason I haven't dropped this discussion is because I really want to >> know if _all_ of those people were actually talking about something else >> (e.g. a smaller scope for Nova), or if it's just you. Because you and I >> are in complete agreement that Nova has grown a lot of obscure >> capabilities that make it fiendishly difficult to maintain, and that in >> many cases might never have been requested if we'd had higher-level >> tools that could meet the same use cases by composing simpler operations. >> >> IMHO some of the contributing factors to that were: >> >> * The aforementioned hostility from some quarters to the existence of >> higher-level projects in OpenStack. >> * The ongoing hostility of operators to deploying any projects outside >> of Keystone/Nova/Glance/Neutron/Cinder (*still* seen playing out in the >> Barbican vs. Castellan debate, where we can't even correct one of >> OpenStack's original sins and bake in a secret store - something k8s >> managed from day one - because people don't want to install another ReST >> API even over a backend that they'll already have to install anyway). >> * The illegibility of public Nova interfaces to potential higher-level >> tools. > > I would like to point something else out here. Something that may not be > pleasant to confront. > > Heat's competition (for resources and mindshare) is Kubernetes, plain > and simple. > > Heat's competition is not other OpenStack projects. > > Nova's competition is not Kubernetes (despite various people continuing > to say that it is). > > Nova is not an orchestration system. Never was and (as long as I'm > kicking and screaming) never will be. > > Nova's primary competition is: > > * Stand-alone Ironic > * oVirt and stand-alone virsh callers > * Parts of VMWare vCenter [3] > * MaaS in some respects > * The *compute provisioning* parts of EC2, Azure, and GCP > > This is why there is a Kubernetes OpenStack cloud provider plugin [4]. > > This plugin uses Nova [5] (which can potentially use Ironic), Cinder, > Keystone and Neutron to deploy kubelets to act as nodes in a Kubernetes > cluster and load balancer objects to act as the proxies that k8s itself > uses when deploying Pods and Services. > > Heat's architecture, template language and object constructs are in > direct competition with Kubernetes' API and architecture, with the > primary difference being a VM-centric [6] vs. a container-centric object > model. > > Heat's template language is similar to Helm's chart template YAML > structure [7], and with Heat's evolution to the "convergence model", > Heat's architecture actually got closer to Kubernetes' architecture: > that of continually attempting to converge an observed state with a > desired state. > > So, what is Heat to do? > > The hype and marketing machine is never-ending, I'm afraid. [8] > > I'm not sure there's actually anything that can be done about this. > Perhaps it is a fait accomplis that Kubernetes/Helm will/has become > synonymous with "orchestration of things". Perhaps not. I'm not an > oracle, unfortunately. > > Maybe the only thing that Heat can do to fend off the coming doom is to > make a case that Heat's performance, reliability, feature set or > integration with OpenStack's other services make it a better candidate > for orchestrating virtual machine or baremetal workloads on an OpenStack > deployment than Kubernetes is. > > Sorry to be the bearer of bad news, > -jay > > [1] I live in Florida, though, which has no mountains. 
But, when I > visit, say, North Carolina, I shall certainly shout it from their mountains. > > [2] some would also say Castellan, Ironic and Designate belong here. > > [3] Though VMWare is still trying to be everything that certain IT > administrators ever needed, including orchestration, backup services, > block storage pooling, high availability, quota management, etc etc > > [4] > https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/#openstack > > [5] > https://github.com/kubernetes/kubernetes/blob/92b81114f43f3ca74988194406957a5d1ffd1c5d/pkg/cloudprovider/providers/openstack/openstack.go#L377 > > [6] The fact that Heat started as a CloudFormation API clone gave it its > VM-centricity. > > [7] > https://github.com/kubernetes/helm/blob/master/docs/chart_template_guide/index.md > > [8] The Kubernetes' machine has essentially decimated all the other > "orchestration of things" projects' resources and mindshare, including a > number of them that were very well architected, well coded, and well > documented: > > * Mesos with Marathon/Aurora > * Rancher > * OpenShift (you know, the original, original one...) > * Nomad > * Docker Swarm/Compose > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From cdent+os at anticdent.org Tue Jul 3 22:01:05 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 3 Jul 2018 23:01:05 +0100 (BST) Subject: [openstack-dev] [tc] [all] TC Report 18-27 Message-ID: HTML: https://anticdent.org/tc-report-18-27.html This week's TC Report will be relatively short. I wrote a lot of OpenStack related words yesterday in [Some Opinions On Openstack](https://anticdent.org/some-opinions-on-openstack.html). That post was related to one of the main themes that has shown up in IRC and email discussions recently: creating a [technical vision](https://etherpad.openstack.org/p/tech-vision-2018) for the near future of OpenStack. One idea has been to [separate plumbing from porcelain](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-06-26.log.html#t2018-06-26T15:30:33). There's also a [long email thread](http://lists.openstack.org/pipermail/openstack-dev/2018-July/131944.html) considering many ideas. One idea from that which I particularly like is unifying all the various agents that live on a compute node into one agent, one that likely talks to `etcd`. `nodelet` like a `kubelet`. None of this is something that will happen overnight. I hope at least some if it does, eventually. Some change that's actually in progress now: For a long time OpenStack has tracked the organizational diversity of contributors to the various sub-projects. There's been a fair bit of talk that the tracking doesn't map to reality in a useful way and we need to [figure out what to do about it](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-06-28.log.html#t2018-06-28T15:06:49). 
That has resulted in a plan to [remove team diversity tags](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-06-28.log.html#t2018-06-28T15:06:49) and instead use a more holistic approach to being aware of and dealing with what's now being called "fragility" in teams. One aspect of this is the human-managed [health tracker](https://wiki.openstack.org/wiki/OpenStack_health_tracker).

Julia went to China for an OpenStack event and her eyes were opened about the different context contributors there experience. She wrote a [superuser post](http://superuser.openstack.org/articles/translating-context-understanding-the-global-open-source-community/) and there's been subsequent related IRC discussion about [the challenges that people in China experience](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-06-27.log.html#t2018-06-27T14:06:00) trying to participate in OpenStack.

More generally there is a need to figure out some ways to build a [shared context](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-03.log.html#t2018-07-03T09:11:10) that involves people who are not a part of our usual circles. As usual, one of the main outcomes of that was that we need to make the time to share and talk more and in a more accessible fashion. We see bursts of that (we're seeing one now) but how do we sustain it and how do we extract some agreement that can lead to concerted action?

--
Chris Dent  ٩◔̯◔۶  https://anticdent.org/
freenode: cdent  tw: @anticdent

From whayutin at redhat.com Tue Jul 3 23:13:41 2018
From: whayutin at redhat.com (Wesley Hayutin)
Date: Tue, 3 Jul 2018 17:13:41 -0600
Subject: [openstack-dev] [tripleo] TripleO CI squad status 7/03/2018
Message-ID: 

Greetings

The TripleO squad has just completed Sprint 15 (6-15 - 7/03). The following is a summary of activities during this sprint.

Epic:
# Sprint 15 Epic (CI Squad): Begin migration of upstream jobs to native zuulv3.

For a list of the completed and remaining items for the sprint please refer to the following Epic card and the sub cards.
https://trello.com/c/bQuQ9aWF/802-sprint-15-ci-goals

Items to Note:
* Timeouts in jobs are a recurring issue upstream. How to handle and fix the timeouts is under discussion. Note, containers may be contributing to the timeouts.

Ruck / Rover:
TripleO Master, 0 days since last promotion
TripleO Queens, 2 days since last promotion
TripleO Pike, 20 days since last promotion
* This is failing in tempest and should be resolved with https://review.openstack.org/#/c/579937/

https://review.rdoproject.org/etherpad/p/ruckrover-sprint15

CRITICAL IN PROGRESS #1779561 No realm key for 'realm1'
  tripleo Assignee: None Reporter: wes hayutin 2 days old Tags: promotion-blocker 6

CRITICAL IN PROGRESS #1779271 periodic-tripleo-ci-centos-7-ovb-1ctlr_1comp-featureset020-queens Details: volume c414293d-eb0f-4d74-8b4d-f9a15e23d399 failed to reach in-use status (current available) within the required time (500 s).
  tripleo Assignee: yatin Reporter: Quique Llorente 4 days old Tags: promotion-blocker 14

CRITICAL FIX RELEASED #1779263 "AnsibleUndefinedVariable: 'dict object' has no attribute 'overcloud'"} at periodic-tripleo-ci-centos-7-multinode-1ctlr-featureset010-master
  tripleo Assignee: Quique Llorente Reporter: Quique Llorente 4 days old Tags: promotion-blocker 6

CRITICAL FIX RELEASED #1778847 fs027 __init__() got an unexpected keyword argument 'cafile'
  tripleo Assignee: wes hayutin Reporter: Quique Llorente 6 days old Tags: promotion-blocker quickstart 6

CRITICAL FIX RELEASED #1778472 docker pull failed: Get https://registry-1.docker.io/v2/tripleomaster/centos-binary-rsyslog-base/manifests/current-tripleo: received unexpected HTTP status: 503 Service Unavailable
  tripleo Assignee: Quique Llorente Reporter: Quique Llorente 8 days old Tags: alert ci promotion-blocker 6

CRITICAL FIX RELEASED #1778201 os-refresh-config undercloud install Error: Evaluation Error: Error while evaluating a Function Call, pick(): must receive at least one non empty
  tripleo Assignee: Quique Llorente Reporter: Quique Llorente 11 days old Tags: ci promotion-blocker 6

CRITICAL FIX RELEASED #1778040 Error at overcloud_prep_containers Package: qpid-dispatch-router-0.8.0-1.el7.x86_64 (@delorean-master-testing)", " Requires: libqpid-proton.so.10()(64bit)
  tripleo Assignee: Quique Llorente Reporter: Quique Llorente 12 days old Tags: alert ci promotion-blocker quickstart 10

CRITICAL FIX RELEASED #1777759 pike, volume failed to build in error status. list index out of range in cinder
  tripleo Assignee: wes hayutin Reporter: wes hayutin 13 days old Tags: alert promotion-blocker 12

CRITICAL FIX RELEASED #1777616 Undercloud installation is failing: Class[Neutron]: has no parameter named 'rabbit_hosts'
  tripleo Assignee: yatin Reporter: yatin 14 days old Tags: alert promotion-blocker 6

CRITICAL FIX RELEASED #1777541 undercloud install error, mistra 503 unavailable
  tripleo Assignee: Alex Schultz Reporter: wes hayutin 14 days old Tags: alert promotion-blocker 10

CRITICAL FIX RELEASED #1777451 Error: /Stage[main]/Ceph::Rgw::Keystone::Auth/Keystone_role Duplicate entry found with name Member
  tripleo Assignee: Quique Llorente Reporter: wes hayutin 15 days old Tags: promotion-blocker 18

CRITICAL FIX RELEASED #1777261 convert-overcloud-undercloud.yml fails on missing update_containers variable
  tripleo Assignee: Sagi (Sergey) Shnaidman Reporter: wes hayutin 17 days old Tags: promotion-blocker 6

CRITICAL FIX RELEASED #1777168 Failures to build python-networking-ovn
  tripleo Assignee: Emilien Macchi Reporter: Emilien Macchi 18 days old Tags: alert ci promotion-blocker 6

CRITICAL FIX RELEASED #1777130 RDO cloud is down
  tripleo Assignee: Quique Llorente Reporter: Quique Llorente 18 days old Tags: alert promotion-blocker

--
Wes Hayutin
Associate MANAGER
Red Hat
whayutin at redhat.com  T: +1919 <+19197544114>4232509  IRC: weshay

View my calendar and check my availability for meetings HERE
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From Kevin.Fox at pnnl.gov Wed Jul 4 00:18:19 2018 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Wed, 4 Jul 2018 00:18:19 +0000 Subject: [openstack-dev] [tc] [all] TC Report 18-26 In-Reply-To: <480ac9d2-65c1-39ff-ec0d-bceade3e1def@gmail.com> References: <70afeb87-37c9-1595-ffa4-aadbd1a90228@gmail.com> <542aba82-5c0c-3549-e587-2deded610fe9@gmail.com> <1A3C52DFCD06494D8528644858247BF01C143256@EX10MBOX03.pnnl.gov> <480ac9d2-65c1-39ff-ec0d-bceade3e1def@gmail.com> Message-ID: <1A3C52DFCD06494D8528644858247BF01C143481@EX10MBOX03.pnnl.gov> Replying inline in outlook. Sorry. :( Prefixing with KF> -----Original Message----- From: Jay Pipes [mailto:jaypipes at gmail.com] Sent: Tuesday, July 03, 2018 1:04 PM To: openstack-dev at lists.openstack.org Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26 I'll answer inline, so that it's easier to understand what part of your message I'm responding to. On 07/03/2018 02:37 PM, Fox, Kevin M wrote: > Yes/no on the vendor distro thing. They do provide a lot of options, but they also provide a fully k8s tested/provided route too. kubeadm. I can take linux distro of choice, curl down kubeadm and get a working kubernetes in literally a couple minutes. How is this different from devstack? With both approaches: * Download and run a single script * Any sort of networking outside of super basic setup requires manual intervention * Not recommended for "production" * Require workarounds when running as not-root Is it that you prefer the single Go binary approach of kubeadm which hides much of the details that devstack was designed to output (to help teach people what's going on under the hood)? KF> so... go to https://docs.openstack.org/devstack/latest/ and one of the first things you see is a bright red Warning box. Don't run it on your laptop. It also targets git master rather then production releases so it is more targeted at developing on openstack itself rather then developers developing their software to run in openstack. My common use case was developing stuff to run in, not developing openstack itself. minikube makes this case first class. Also, it requires a linux box to deploy it. Minikube works on macos and windows as well. Yeah, not really an easy thing to do, but it does it pretty well. I did a presentation on Kubernetes once, put up a slide on minikube, and 5 slides later, one of the physicists in the room said, btw, I have it working on my mac (personal laptop). Not trying to slam devstack. It really is a good piece of software. but it still has a ways to go to get to that point. And lastly, minikube's default bootstrapper these days is kubeadm. So the kubernetes you get to develop against is REALLY close to one you could deploy yourself at scale in vms or on bare metal. The tools/containers it uses are byte identical. They will behave the same. Devstack is very different then most production deployments. > No compiling anything or building containers. That is what I mean when I say they have a product. What does devstack compile? By "compile" are you referring to downloading code from git repositories? Or are you referring to the fact that with kubeadm you are downloading a Go binary that hides the downloading and installation of all the other Kubernetes images for you [1]? KF> The go binary orchestrates a bit, but for the most part, you get one system package installed (or use one statically linked binary) kubelet. From there, you switch to using prebuilt containers for all the other services. 
Those binaries have been through a build / test/ release pipeline and are guaranteed to be the same between all the nodes you install them on. It is easy to run a deployment on your test cluster, and ensure it works the same way on your production system. You can do the same with say rpms, but then you need to build up plumbing to mirror your rpms and plumbing to promote from testing to production, etc. Then you have to configure all the nodes to not accidently pull from a remote rpm mirror. Some of the system updates try really hard to reenable that. :/ K8s gives you easy testing/promotion by the way they tag things and prebuild stuff for you. So you just tweak your k8s version and off go. You don't have to mirror if you don't want to. Lower barrier to entry there. [1] https://github.com/kubernetes/kubernetes/blob/8d73473ce8118422c9e0c2ba8ea669ebbf8cee1c/cmd/kubeadm/app/cmd/init.go#L267 https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/images/images.go#L63 > Other vendors provide their own builds, release tooling, or config management integration. which is why that list is so big. But it is up to the Operators to decide the route and due to k8s having a very clean, easy, low bar for entry it sets the bar for the other products to be even better. I fail to see how devstack and kubeadm aren't very much in the same vein? KF> You've switched from comparing devstack and minikube to devstack and kubeadm. Kubeadm is plumbing to build dev, test, and production systems. Devstack is very much only ever intended for the dev phase. And like I said before, a little more focused on the dev of openstack itself, not of deving code running in it. Minikube is really intended to allow devs to develop software to run inside k8s and behave as much as possible to a full k8s cluster. > The reason people started adopting clouds was because it was very quick to request resources. One of clouds features (some say drawbacks) vs VM farms has been ephemeralness. You build applications on top of VMs to provide a Service to your Users. Great. Things like Containers though launch much faster and have generally more functionality for plumbing them together then VMs do though. Not sure what this has to do with what we've been discussing. KF> We'll skip it for now... Maybe it will become a little more clear in the context of other responses below. > So these days containers are out clouding vms at this use case. So, does Nova continue to be cloudy vm or does it go for the more production vm use case like oVirt and VMware? "production VM" use case like oVirt or VMWare? I don't know what that means. You mean "a GUI-based VM management system"? Pets vs Cattle. VMware/oVirt's primary focus is on being feature rich around around keeping pets alive/happy/responsive/etc. Live migration, cpu/memory hot plugging... > Without strong orchestration of some kind on top the cloudy use case is also really hard with Nova. So we keep getting into this tug of war between people wanting VM's as a building blocks of cloud scale applications, and those that want Nova to be an oVirt/VMware replacement. Honestly, its not doing either use case great because it cant decide what to focus on. No, that's not at all what I've been saying. I continue to see Nova (and other services in its layer of OpenStack) as a building block *for higher-level systems like Kubernetes or Heat*. There is a reason that Kubernetes has an OpenStack cloud provider plugin, and that plugin calls imperative Nova, Neutron, Cinder, and Keystone API calls. 
KF> Yeah, sorry, that wasn't intended at you specifically. I've talked to the Nova team many times over the years and saw that back and forth happening. Some folks wanted more and more pet like features pushed in, and others wanted to optimize more for cattle. Its kind of in an uncomfortable middle ground now I think. No one ever defined specifically what was in/out of scope for Nova. Same issue as defining what OpenStack was, but just at the Nova level. > oVirt is a better VMware alternative today then Nova is, having used it. It focuses specifically on the same use cases. Nova is better at being a cloud then oVirt and VMware. but lags behind Azure/AWS a lot when it comes to having apps self host on it. (progress is being made again finally. but its slow) I'm not particularly interested in having Nova be a free VMWare replacement -- or in trying to be whatever oVirt has become. KF> I mention it as I don't think all of Nova devs feel the same. Some might see usefulness in these things, and as long as the feature requests to Nova don't cause Nova to become something other than low-level compute plumbing, I'm fine with that. KF> I think it already has affected things. There are pettish features in it and there are now so many features in Nova that Nova is pushing back against new features that could help the low-level compute plumbing use case. > While some people only ever consider running Kubernetes on top of a cloud, some of us realize maintaining both a cloud an a kubernetes is unnecessary and can greatly simplify things simply by running k8s on bare metal. This does then make it a competitor to Nova as a platform for running workload on. What percentage of Kubernetes users deploy on baremetal (and continue to deploy on baremetal in production as opposed to just toying around with it)? KF> I do not have metrics. Just my own experience. I've seen several clusters now go from in vm to bare metal though as its very expensive to upgrade OpenStack and they really didn't need it anymore. Or the workload could be split between oVirt and Kubernetes on bare metal. Long term though, if VM's become first class citizens on k8s, I could see k8s do both jobs easily. > As k8s gains more multitenancy features, this trend will continue to grow I think. OpenStack needs to be ready for when that becomes a thing. OpenStack is already multi-tenant, being designed as such from day one. With the exception of Ironic, which uses Nova to enable multi-tenancy. KF> Yes, but at a really high cost. What specifically are you referring to "OpenStack needs to be ready"? Also, what specific parts of OpenStack are you referring to there? KF> If something like k8s gains full multitenancy, then one of OpenStack's major remaining selling points vanishes and the difference in operator overhead becomes even more pronounced. "Why pay the operator overhead of OpenStack when you could just do K8s." OpenStack needs to pay down some of that operator related technical debt before k8s gains multitenancy and maybe vm support. What I mean here is, users care about deploying workload to their datacenter. They care that it makes it easy. I really could care less if it was containers or vms provided it worked well. The api k8s gives to do so is very smooth and getting smoother. On the openstack side, its been very bumpy and progressing very slowly. I fought for years to smooth it out and still the main road bumps are there. 
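To give a feel for the "smooth" side of that comparison, here is a rough sketch of deploying a small workload through the Kubernetes API with the Python client; the names and image are placeholders:

    from kubernetes import client, config

    config.load_kube_config()  # assumes a working kubeconfig

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name='demo-web'),
        spec=client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels={'app': 'demo-web'}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={'app': 'demo-web'}),
                spec=client.V1PodSpec(containers=[client.V1Container(
                    name='web',
                    image='nginx:1.15',  # placeholder image
                    ports=[client.V1ContainerPort(container_port=80)],
                )]))))

    client.AppsV1Api().create_namespaced_deployment(namespace='default', body=deployment)

A single authenticated endpoint and one object submission covers scheduling, restarts and scaling; the equivalent on the OpenStack side spans several services, which is the road-bump argument being made here.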
> Heat is a good start for an orchestration system, but it is hamstrung by it being an optional component, by there still not being a way to download secrets to a vm securely from the secret store, by the secret store also being completely optional, etc. An app developer can't rely on any of it. :/ Heat is hamstrung by the lack of blessing so many other OpenStack services are. You can't fix it until you fix that fundamental brokenness in OpenStack. I guess I just fundamentally disagree that having a monolithic all-things-for-all-users application architecture and feature set is something that OpenStack should be. KF> I'm not necessarily arguing for that... More like how the Linux Kernel is monolithic and modular at the same time. You can customize it for all sorts of really strange hardware with and without large chunks of things. BUT, you also can just grab a prebuilt distro kernel and have it work on a lot of machines without issue. KF> /me puts his app developer hat back on. As an app developer, you need a reliable base platform to target. If you can't rely on stuff like Orchestration always being there, you have 3 choices. You limit your customer base to only those that have the component installed (usually not acceptable), You write everything yourself (expensive) or if available, you develop on a platform that gives you more out of the box (what is happening with app devs moving quickly away from openstack to things like k8s. Sorry. Hard to say/hear.) There is a *reason* that Kubernetes jettisoned all the cloud provider code from its core. The reason is because setting up that base stuff is *hard* and that work isn't germane to what Kubernetes is (a container orchestration system, not a datacenter resource management system). KF> Disagree on that one. CSI is happening at the same time in the same way for almost the same reasons. They jettisoned it because: * They are following more and more the philosophy of eating their own dogfood. You should be able to deploy parts of Kubernetes with Kubernetes. * Their api has finally become robust enough that it is reasonable to self enhance that part. * They got to the point that other pressing issues were solved and they could tackle kicking it out of tree * Having it in tree was slowing down development/functionality. (This is the reason to make it a little bit harder for the ops in exchange for the clear benefits.) KF> Like I said before, I'm not necessarily saying all code has to be wrapped up into a big ball and "plugins" are not a good thing. I think plugins are hugely important. But, I am arguing that fewer things are probably better for operators and splitting everything out into a million pieces without regard to that or having a good reason to do so is a kind of pre-optimization. > Heat is also hamstrung being an orchestrator of existing API's by there being holes in the API's. I agree there are some holes in some of the APIs. Happy to work on plugging those holes as long as the holes are properly identified as belonging to the correct API and are not simply a feature request what would expand the scope of lower-level plumbing services like Nova. KF> That’s been the struggle I've always hit with OpenStack. Getting leads from each project involved to cooperatively decide where the heck an api belongs. IMO, the api belongs to OpenStack! Not to Nova or Neutron or Glance. OpenStacks project api's are riddled with cases where we couldn't pick the right place off the bat. Lots of litter in the Nova api in particular. 
Then it went the other way. Everyone's so afraid of adopting an api forever that they can't make a decision. K8s solved it by having a K8s api, followed by the code equivalents of nova/glance/etc picking things up as needed from the api server. Api is separated from what repo/service hosts the code. KF> The real problem is OpenStack does not have an api. :/ it only has projects that have api's. > Think of OpenStack like a game console. The moment you make a component optional and make it takes extra effort to obtain, few software developers target it and rarely does anyone one buy the addons it because there isn't software for it. Right now, just about everything in OpenStack is an addon. Thats a problem. I don't have any game consoles nor do I develop software for them, so I don't really see the correlation here. That said, I'm 100% against a monolithic application approach, as I've mentioned before. KF> Bad analogy then. Sorry. Hopefully the monolithic subject was addressed adequately above. KF> Thanks, KF> Kevin Best, -jay > Thanks, > Kevin > > > ________________________________________ > From: Jay Pipes [jaypipes at gmail.com] > Sent: Monday, July 02, 2018 4:13 PM > To: openstack-dev at lists.openstack.org > Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26 > > On 06/27/2018 07:23 PM, Zane Bitter wrote: >> On 27/06/18 07:55, Jay Pipes wrote: >>> Above, I was saying that the scope of the *OpenStack* community is >>> already too broad (IMHO). An example of projects that have made the >>> *OpenStack* community too broad are purpose-built telco applications >>> like Tacker [1] and Service Function Chaining. [2] >>> >>> I've also argued in the past that all distro- or vendor-specific >>> deployment tools (Fuel, Triple-O, etc [3]) should live outside of >>> OpenStack because these projects are more products and the relentless >>> drive of vendor product management (rightfully) pushes the scope of >>> these applications to gobble up more and more feature space that may >>> or may not have anything to do with the core OpenStack mission (and >>> have more to do with those companies' product roadmap). >> >> I'm still sad that we've never managed to come up with a single way to >> install OpenStack. The amount of duplicated effort expended on that >> problem is mind-boggling. At least we tried though. Excluding those >> projects from the community would have just meant giving up from the >> beginning. > > You have to have motivation from vendors in order to achieve said single > way of installing OpenStack. I gave up a long time ago on distros and > vendors to get behind such an effort. > > Where vendors see $$$, they will attempt to carve out value > differentiation. And value differentiation leads to, well, differences, > naturally. > > And, despite what some might misguidedly think, Kubernetes has no single > installation method. Their *official* setup/install page is here: > > https://kubernetes.io/docs/setup/pick-right-solution/ > > It lists no fewer than *37* (!) different ways of installing Kubernetes, > and I'm not even including anything listed in the "Custom Solutions" > section. > >> I think Thierry's new map, that collects installer services in a >> separate bucket (that may eventually come with a separate git namespace) >> is a helpful way of communicating to users what's happening without >> forcing those projects outside of the community. 
> > Sure, I agree the separate bucket is useful, particularly when paired > with information that allows operators to know how stable and/or > bleeding edge the code is expected to be -- you know, those "tags" that > the TC spent time curating. > >>>> So to answer your question: >>>> >>>> zaneb: yeah... nobody I know who argues for a small stable >>>> core (in Nova) has ever said there should be fewer higher layer >>>> services. >>>> zaneb: I'm not entirely sure where you got that idea from. >>> >>> Note the emphasis on *Nova* above? >>> >>> Also note that when I've said that *OpenStack* should have a smaller >>> mission and scope, that doesn't mean that higher-level services aren't >>> necessary or wanted. >> >> Thank you for saying this, and could I please ask you to repeat this >> disclaimer whenever you talk about a smaller scope for OpenStack. > > Yes. I shall shout it from the highest mountains. [1] > >> Because for those of us working on higher-level services it feels like >> there has been a non-stop chorus (both inside and outside the project) >> of people wanting to redefine OpenStack as something that doesn't >> include us. > > I've said in the past (on Twitter, can't find the link right now, but > it's out there somewhere) something to the effect of "at some point, > someone just needs to come out and say that OpenStack is, at its core, > Nova, Neutron, Keystone, Glance and Cinder". > > Perhaps this is what you were recollecting. I would use a different > phrase nowadays to describe what I was thinking with the above. > > I would say instead "Nova, Neutron, Cinder, Keystone and Glance [2] are > a definitive lower level of an OpenStack deployment. They represent a > set of required integrated services that supply the most basic > infrastructure for datacenter resource management when deploying OpenStack." > > Note the difference in wording. Instead of saying "OpenStack is X", I'm > saying "These particular services represent a specific layer of an > OpenStack deployment". > > Nowadays, I would further add something to the effect of "Depending on > the particular use cases and workloads the OpenStack deployer wishes to > promote, an additional layer of services provides workload orchestration > and workflow management capabilities. This layer of services include > Heat, Mistral, Tacker, Service Function Chaining, Murano, etc". > > Does that provide you with some closure on this feeling of "non-stop > chorus" of exclusion that you mentioned above? > >> The reason I haven't dropped this discussion is because I really want to >> know if _all_ of those people were actually talking about something else >> (e.g. a smaller scope for Nova), or if it's just you. Because you and I >> are in complete agreement that Nova has grown a lot of obscure >> capabilities that make it fiendishly difficult to maintain, and that in >> many cases might never have been requested if we'd had higher-level >> tools that could meet the same use cases by composing simpler operations. >> >> IMHO some of the contributing factors to that were: >> >> * The aforementioned hostility from some quarters to the existence of >> higher-level projects in OpenStack. >> * The ongoing hostility of operators to deploying any projects outside >> of Keystone/Nova/Glance/Neutron/Cinder (*still* seen playing out in the >> Barbican vs. 
Castellan debate, where we can't even correct one of >> OpenStack's original sins and bake in a secret store - something k8s >> managed from day one - because people don't want to install another ReST >> API even over a backend that they'll already have to install anyway). >> * The illegibility of public Nova interfaces to potential higher-level >> tools. > > I would like to point something else out here. Something that may not be > pleasant to confront. > > Heat's competition (for resources and mindshare) is Kubernetes, plain > and simple. > > Heat's competition is not other OpenStack projects. > > Nova's competition is not Kubernetes (despite various people continuing > to say that it is). > > Nova is not an orchestration system. Never was and (as long as I'm > kicking and screaming) never will be. > > Nova's primary competition is: > > * Stand-alone Ironic > * oVirt and stand-alone virsh callers > * Parts of VMWare vCenter [3] > * MaaS in some respects > * The *compute provisioning* parts of EC2, Azure, and GCP > > This is why there is a Kubernetes OpenStack cloud provider plugin [4]. > > This plugin uses Nova [5] (which can potentially use Ironic), Cinder, > Keystone and Neutron to deploy kubelets to act as nodes in a Kubernetes > cluster and load balancer objects to act as the proxies that k8s itself > uses when deploying Pods and Services. > > Heat's architecture, template language and object constructs are in > direct competition with Kubernetes' API and architecture, with the > primary difference being a VM-centric [6] vs. a container-centric object > model. > > Heat's template language is similar to Helm's chart template YAML > structure [7], and with Heat's evolution to the "convergence model", > Heat's architecture actually got closer to Kubernetes' architecture: > that of continually attempting to converge an observed state with a > desired state. > > So, what is Heat to do? > > The hype and marketing machine is never-ending, I'm afraid. [8] > > I'm not sure there's actually anything that can be done about this. > Perhaps it is a fait accomplis that Kubernetes/Helm will/has become > synonymous with "orchestration of things". Perhaps not. I'm not an > oracle, unfortunately. > > Maybe the only thing that Heat can do to fend off the coming doom is to > make a case that Heat's performance, reliability, feature set or > integration with OpenStack's other services make it a better candidate > for orchestrating virtual machine or baremetal workloads on an OpenStack > deployment than Kubernetes is. > > Sorry to be the bearer of bad news, > -jay > > [1] I live in Florida, though, which has no mountains. But, when I > visit, say, North Carolina, I shall certainly shout it from their mountains. > > [2] some would also say Castellan, Ironic and Designate belong here. > > [3] Though VMWare is still trying to be everything that certain IT > administrators ever needed, including orchestration, backup services, > block storage pooling, high availability, quota management, etc etc > > [4] > https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/#openstack > > [5] > https://github.com/kubernetes/kubernetes/blob/92b81114f43f3ca74988194406957a5d1ffd1c5d/pkg/cloudprovider/providers/openstack/openstack.go#L377 > > [6] The fact that Heat started as a CloudFormation API clone gave it its > VM-centricity. 
> > [7] > https://github.com/kubernetes/helm/blob/master/docs/chart_template_guide/index.md > > [8] The Kubernetes' machine has essentially decimated all the other > "orchestration of things" projects' resources and mindshare, including a > number of them that were very well architected, well coded, and well > documented: > > * Mesos with Marathon/Aurora > * Rancher > * OpenShift (you know, the original, original one...) > * Nomad > * Docker Swarm/Compose > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From lars at redhat.com Wed Jul 4 01:29:37 2018 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Tue, 3 Jul 2018 21:29:37 -0400 Subject: [openstack-dev] [puppet][tripleo] Why is this acceptance test failing? Message-ID: <20180704012937.6eoffaxeeeq4oadg@redhat.com> I need another set of eyes. I have a review that keeps failing here: http://logs.openstack.org/47/575147/16/check/puppet-openstack-beaker-centos-7/3f70cc9/job-output.txt.gz#_2018-07-04_00_42_19_696966 It's looking for the regular expression: /Puppet::Type::Keystone_tenant::ProviderOpenstack: Support for a resource without the domain.*using 'Default'.*default domain id is '/ The output shown in the failure message contains: [1;33mWarning: Puppet::Type::Keystone_tenant::ProviderOpenstack: Support for a resource without the domain set is deprecated in Liberty cycle. It will be dropped in the M-cycle. Currently using 'Default' as default domain name while the default domain id is '7ddf1dfa7fac46679ba7ae2245bece2f'.[0m The regular expression matches the text! The failing test is here: https://github.com/openstack/puppet-keystone/blob/master/spec/acceptance/default_domain_spec.rb#L59 I've been staring at this for a while and I'm not sure what's going on. -- Lars Kellogg-Stedman | larsks @ {irc,twitter,github} http://blog.oddbit.com/ | From hongbin034 at gmail.com Wed Jul 4 02:06:41 2018 From: hongbin034 at gmail.com (Hongbin Lu) Date: Tue, 3 Jul 2018 22:06:41 -0400 Subject: [openstack-dev] [tc] Technical Committee Update for 3 July In-Reply-To: <1530641744-sup-28@lrrr.local> References: <1530641744-sup-28@lrrr.local> Message-ID: > > Discussions about affiliation diversity continue in two directions. > Zane's proposal for requirements for new project teams has stalled a > bit. The work Thierry and Mohammed have done on the diversity tags has > brought a new statistics script and a proposal to drop the use of the > tags in favor of folding the diversity information into the more general > health checks we are doing. Thierry has updated the health tracker page > Hi, If appropriate, I would rather to nominate myself as the liaison for the Zun project. I am the first PTL of the project and familiar with the current status. 
I should be more appropriate for doing the health evaluation for this project. Please let me know if it is possible for me to participant. Best regards, Hongbin -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Wed Jul 4 02:38:27 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Tue, 3 Jul 2018 20:38:27 -0600 Subject: [openstack-dev] [tripleo] TripleO Tempest squad status 07/03/2018 Message-ID: Greetings, The TripleO Tempest squad has just completed Sprint 15 (6-15 - 7/03). The following is a summary of activities during this sprint. Epic: # Sprint 15 Epic ( Tempest Squad): Finish the refactoring of the core OpenStack services in python-tempestconf, refstack certification tests, and other miscellaneous work. Chandan was on PTO most of the sprint, Arx was active helping to resolve tempest issues, and Martin was focused on refstack certifications so the progress was a little less than normal this sprint. For a list of the completed and remaining items for the sprint please refer to the following Epic card and the sub cards. https://trello.com/c/6QKG0HkU/801-sprint-15-python-tempestconf Items to Note: * Full runs of tempest are again fully passing in upstream master, queens. Pike will be unblocked when https://review.openstack.org/#/c/579937/ merges. * Chandan has volunteered to ruck / rove this sprint, so the team will again be operating with only two active team members * New documentation was created for containerized tempest * https://docs.openstack.org/tripleo-docs/latest/install/basic_deployment/tempest.html#running-containerized-tempest-manually * Look for an upstream discussion around moving as much tempest documentation to the tempest project as possible. * Sprint 16 is the final sprint that will focus on refactoring python-tempestconf. -- Wes Hayutin Associate MANAGER Red Hat w hayutin at redhat.com T: +1919 <+19197544114>4232509 IRC: weshay View my calendar and check my availability for meetings HERE -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Wed Jul 4 02:43:40 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Tue, 3 Jul 2018 20:43:40 -0600 Subject: [openstack-dev] [tripleo] TripleO CI squad status 7/03/2018 In-Reply-To: References: Message-ID: Apologies, The dark theme in my browser changed the font color of the text. The email is available at http://lists.openstack.org/pipermail/openstack-dev/2018-July/131984.html Thank you! On Tue, Jul 3, 2018 at 7:13 PM Wesley Hayutin wrote: > Greetings > > The TripleO squad has just completed Sprint 15 (6-15 - 7/03). > The following is a summary of activities during this sprint. > > Epic: > # Sprint 15 Epic (CI Squad): Begin migration of upstream jobs to native > zuulv3. > For a list of the completed and remaining items for the sprint please > refer to the following Epic card and the sub cards. > https://trello.com/c/bQuQ9aWF/802-sprint-15-ci-goals > > Items to Note: > * Timeouts in jobs are a recurring issue upstream. How to handle and fix > the timeouts is under discussion. Note, containers may be contributing to > the timeouts. 
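For reference, the knob that governs that wall-clock limit in native zuulv3 is the per-job timeout, so part of "handling the timeouts" is deciding where (and whether) to raise it. A minimal sketch of what that looks like in a job definition follows; the job and parent names are invented for illustration and are not the real tripleo-ci definitions:

    # zuul.d/jobs.yaml -- illustrative only, names are placeholders
    - job:
        name: tripleo-ci-centos-7-containers-multinode
        parent: tripleo-ci-base-multinode
        timeout: 10800    # seconds; raise when container pulls push the job past the old limit

Raising the limit only hides the symptom, of course; the open question noted above is whether the extra time is going to container operations that can be trimmed instead.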
> > Ruck / Rover: > > TripleO Master, 0 days since last promotion > TripleO Queens, 2 days since last promotion > TripleO Pike, 20 days since last promotion > * This is failing in tempest and should be resolved with > https://review.openstack.org/#/c/579937/ > > https://review.rdoproject.org/etherpad/p/ruckrover-sprint15 > > > CRITICAL IN PROGRESS > #1779561 No realm key for 'realm1' > tripleo Assignee: None Reporter: wes hayutin 2 days old Tags: > promotion-blocker 6 > CRITICAL IN PROGRESS > #1779271 periodic-tripleo-ci-centos-7-ovb-1ctlr_1comp-featureset020-queens > Details: volume c414293d-eb0f-4d74-8b4d-f9a15e23d399 failed to reach in-use > status (current available) within the required time (500 s). > tripleo Assignee: yatin Reporter: Quique Llorente 4 days old Tags: > promotion-blocker 14 > CRITICAL FIX RELEASED > #1779263 "AnsibleUndefinedVariable: 'dict object' has no attribute > 'overcloud'"} at > periodic-tripleo-ci-centos-7-multinode-1ctlr-featureset010-master > tripleo Assignee: Quique Llorente Reporter: Quique Llorente 4 days old > Tags: promotion-blocker 6 > CRITICAL FIX RELEASED > #1778847 fs027 __init__() got an unexpected keyword argument 'cafile' > tripleo Assignee: wes hayutin Reporter: Quique Llorente 6 days old > Tags: promotion-blocker quickstart 6 > CRITICAL FIX RELEASED > #1778472 docker pull failed: Get > https://registry-1.docker.io/v2/tripleomaster/centos-binary-rsyslog-base/manifests/current-tripleo: > received unexpected HTTP status: 503 Service Unavailable > tripleo Assignee: Quique Llorente Reporter: Quique Llorente 8 days old > Tags: alert ci promotion-blocker 6 > CRITICAL FIX RELEASED > #1778201 os-refresh-config undercloud install Error: Evaluation Error: > Error while evaluating a Function Call, pick(): must receive at least one > non empty > tripleo Assignee: Quique Llorente Reporter: Quique Llorente 11 days > old Tags: ci promotion-blocker 6 > CRITICAL FIX RELEASED > #1778040 Error at overcloud_prep_containers Package: > qpid-dispatch-router-0.8.0-1.el7.x86_64 (@delorean-master-testing)", " > Requires: libqpid-proton.so.10()(64bit) > tripleo Assignee: Quique Llorente Reporter: Quique Llorente 12 days old > Tags: alert ci promotion-blocker quickstart 10 > CRITICAL FIX RELEASED > #1777759 pike, volume failed to build in error status. 
list index out of > range in cinder > tripleo Assignee: wes hayutin Reporter: wes hayutin 13 days old Tags: > alert promotion-blocker 12 > CRITICAL FIX RELEASED > #1777616 Undercloud installation is failing: Class[Neutron]: has no > parameter named 'rabbit_hosts' > tripleo Assignee: yatin Reporter: yatin 14 days old Tags: alert > promotion-blocker 6 > CRITICAL FIX RELEASED > #1777541 undercloud install error, mistra 503 unavailable > tripleo Assignee: Alex Schultz Reporter: wes hayutin 14 days old Tags: > alert promotion-blocker 10 > CRITICAL FIX RELEASED > #1777451 Error: /Stage[main]/Ceph::Rgw::Keystone::Auth/Keystone_role > Duplicate entry found with name Member > tripleo Assignee: Quique Llorente Reporter: wes hayutin 15 days old > Tags: promotion-blocker 18 > CRITICAL FIX RELEASED > #1777261 convert-overcloud-undercloud.yml fails on missing > update_containers variable > tripleo Assignee: Sagi (Sergey) Shnaidman Reporter: wes hayutin 17 days > old Tags: promotion-blocker 6 > CRITICAL FIX RELEASED > #1777168 Failures to build python-networking-ovn > tripleo Assignee: Emilien Macchi Reporter: Emilien Macchi 18 days old > Tags: alert ci promotion-blocker 6 > CRITICAL FIX RELEASED > #1777130 RDO cloud is down > tripleo Assignee: Quique Llorente Reporter: Quique Llorente 18 days > old Tags: alert promotion-blocker > -- > > Wes Hayutin > > Associate MANAGER > > Red Hat > > > > w hayutin at redhat.com T: +1919 <+19197544114> > 4232509 IRC: weshay > > > View my calendar and check my availability for meetings HERE > > -- Wes Hayutin Associate MANAGER Red Hat w hayutin at redhat.com T: +1919 <+19197544114>4232509 IRC: weshay View my calendar and check my availability for meetings HERE -------------- next part -------------- An HTML attachment was scrubbed... URL: From renat.akhmerov at gmail.com Wed Jul 4 05:41:44 2018 From: renat.akhmerov at gmail.com (Renat Akhmerov) Date: Wed, 4 Jul 2018 12:41:44 +0700 Subject: [openstack-dev] [sqlalchemy][db][oslo.db][mistral] Is there a recommended MySQL driver for OpenStack projects? In-Reply-To: <1530621719-sup-3785@lrrr.local> References: <91cbd291-628b-4dfb-8a96-080af2ef6391@Spark> <1530621719-sup-3785@lrrr.local> Message-ID: <186cc6f5-2f7a-4c6a-a80c-dc4612ebfa10@Spark> > > If you have a scaling issue that may be solved by eventlet, that's > one thing, but please don't adopt eventlet just because a lot of > other projects have. We've tried several times to minimize our > reliance on eventlet because new releases tend to introduce bugs. > > Have you tried the 'threading' executor? Yes, we’re trying to solve a scaling issue. Well, I tried “threading” executor also but there’s no visible performance boost. Renat -------------- next part -------------- An HTML attachment was scrubbed... URL: From cjeanner at redhat.com Wed Jul 4 05:51:45 2018 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Wed, 4 Jul 2018 07:51:45 +0200 Subject: [openstack-dev] [TripleO] improved privilege escalation in py-scripts Message-ID: Dear all, In order to improve the overall security in TripleO, we're currently creating a couple of specs, aiming Stein version. The first one concerns calls to "sudo" from shell scripts and the like: https://review.openstack.org/572760 The second one concerns privilege escalation inside python scripts: https://review.openstack.org/580033 The short version is "get rid of the NOPASSWD:ALL" scattering the sudoers for a couple of users. 
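To make that concrete, here is a hand-written sketch of the kind of sudoers change the two specs are driving at; the user name and command list are made up for illustration and are not taken from the specs themselves:

    # What the specs want to remove: blanket escalation for a service user
    tripleo-user ALL = (ALL) NOPASSWD: ALL

    # The direction instead: an explicit allow-list of the commands that user actually needs
    Cmnd_Alias TRIPLEO_MIN = /usr/bin/puppet apply *, /bin/systemctl restart openstack-*
    tripleo-user ALL = (root) NOPASSWD: TRIPLEO_MIN

The shell-script spec and the python spec then cover how those narrower entries get invoked from the existing tooling.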
Both are still Work In Progress, and need a ton of reviews and discussions in order to get a clear consensus from the community. Thank you for your time and feedback. Cheers, C. -- Cédric Jeanneret Software Engineer DFG:DF -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From zhipengh512 at gmail.com Wed Jul 4 08:02:01 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Wed, 4 Jul 2018 16:02:01 +0800 Subject: [openstack-dev] [cyborg]Weekly Team Meeting 2018.07.04 Message-ID: Hi Team, We will have our weekly meeting as usual at #openstack-cyborg starting UTC1400. The main focus is to align the development status. -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From duttaa at hotmail.com Wed Jul 4 10:26:30 2018 From: duttaa at hotmail.com (Abhijit Dutta) Date: Wed, 4 Jul 2018 10:26:30 +0000 Subject: [openstack-dev] [openstack-community] DevStack Installation issue In-Reply-To: <16C9C3E5-3EDE-4415-8EFB-CD2035A6CC0F@redhat.com> References: , <16C9C3E5-3EDE-4415-8EFB-CD2035A6CC0F@redhat.com> Message-ID: Hi, Sorry, Neither of those paths are vaild. Still stuck. (attached log generated during installation). ~Thanx Abhijit From: Slawomir Kaplonski Sent: Saturday, June 30, 2018 5:57 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [openstack-community] DevStack Installation issue Hi, In error log there is info that placement API service didn’t start properly. You should then go to placement API logs (/var/log/nova/ or /opt/stack/logs/nova probably) and check there what was wrong with it. > Wiadomość napisana przez Abhijit Dutta w dniu 30.06.2018, o godz. 18:51: > > Hi All, > > Any help here will be appreciated. > > ~Thanx > Abhijit > > From: Abhijit Dutta > Sent: Friday, June 29, 2018 8:10 AM > To: Dr. Jens Harbott (frickler); OpenStack Development Mailing List (not for usage questions) > Subject: Re: [openstack-dev] [openstack-community] DevStack Installation issue > > Hi, > > As advised I installed Fedora 27 (Workstation) and tried with the latest version of devstack (pulled from git). However this time I got the following error - > > ./stack.sh:1313:start_placement > /opt/stack/devstack/lib/placement:184:start_placement_api > /opt/stack/devstack/lib/placement:179:die > [ERROR] /opt/stack/devstack/lib/placement:179 placement-api did not start > Error on exit > World dumping... see /opt/stack/logs/worlddump-2018-06-29-071219.txt for details (attached) > > The local.cnf has been configured as: > > [[local|localrc]] > FLOATING_RANGE=192.168.1.224/27 > FIXED_RANGE=10.11.12.0/24 > FIXED_NETWORK_SIZE=256 > FLAT_INTERFACE=eth0 > ADMIN_PASSWORD=supersecret > DATABASE_PASSWORD=iheartdatabases > RABBIT_PASSWORD=flopsymopsy > SERVICE_PASSWORD=iheartksl > > I have configured a static IP which is 192.168.1.201 in my laptop, which has dual core and 3gigs RAM. > > Please let me know, what can cause this error. > > ~Thanx > Abhijit > > > > > From: Dr. 
Jens Harbott (frickler) > Sent: Wednesday, June 27, 2018 3:53 PM > To: OpenStack Development Mailing List (not for usage questions) > Cc: Abhijit Dutta > Subject: Re: [openstack-dev] [openstack-community] DevStack Installation issue > > 2018-06-27 16:58 GMT+02:00 Amy Marrich : > > Abhijit, > > > > I'm forwarding your issue to the OpenStack-dev list so that the right people > > might see your issue and respond. > > > > Thanks, > > > > Amy (spotz) > > > > ---------- Forwarded message ---------- > > From: Abhijit Dutta > > Date: Wed, Jun 27, 2018 at 5:23 AM > > Subject: [openstack-community] DevStack Installation issue > > To: "community at lists.openstack.org" > > > > > > Hi, > > > > > > I am trying to install DevStack for the first time in a baremetal with > > Fedora 28 installed. While executing the stack.sh I am getting the > > following error: > > > > > > No match for argument: Django > > Error: Unable to find a match > > > > Can anybody in the community help me out with this problem. > > We are aware of some issues with deploying devstack on Fedora 28, > these are being worked on, see > https://review.openstack.org/#/q/status:open+project:openstack-dev/devstack+branch:master+topic:uwsgi-f28 > > If you want a quick solution, you could try deploying on Fedora 27 or > Centos 7 instead. > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev — Slawek Kaplonski Senior software engineer Red Hat __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: worlddump-2018-06-29-071219.txt URL: From gmann at ghanshyammann.com Wed Jul 4 11:10:48 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 04 Jul 2018 20:10:48 +0900 Subject: [openstack-dev] [nova]API update week 28-4 Message-ID: <16464fcd628.1238ed33e3912.7329087506324338162@ghanshyammann.com> Hi All, Please find the Nova API highlights of this week. Weekly Office Hour: =============== We have re-started the Nova API discussion in office hour. I have updated the wiki page for more information about office hours: https://wiki.openstack.org/wiki/Meetings/NovaAPI What we discussed this week: - This was the first office hours after long time. - Collected the API related BPs on etherpad (rocky-nova-priorities-tracking) for review. - Created the weekly bug report etherpad and we will track down the number there. - Home work for API subteam to at least review 3 in-progress bug patches. - From next week we will do some online bug triage/review or discussion around ongoing BP. Planned Features : ============== Below are the API related features for Rocky cycle. Nova API Sub team will start reviewing those to give their regular feedback. If anythings missing there feel free to add those in etherpad- https://etherpad.openstack.org/p/rocky-nova-priorities-tracking 1. 
Servers Ips non-unique network names : - https://blueprints.launchpad.net/nova/+spec/servers-ips-non-unique-network-names - Spec Update need another +2 - https://review.openstack.org/#/c/558125/ - https://review.openstack.org/#/q/topic:bp/servers-ips-non-unique-network-names+(status:open+OR+status:merged) - Weekly Progress: On Hold. Waiting for spec update to merge first. 2. Abort live migration in queued state: - https://blueprints.launchpad.net/nova/+spec/abort-live-migration-in-queued-status - https://review.openstack.org/#/q/topic:bp/abort-live-migration-in-queued-status+(status:open+OR+status:merged) - Weekly Progress: Code is up for review. No Review last week. 3. Complex anti-affinity policies: - https://blueprints.launchpad.net/nova/+spec/complex-anti-affinity-policies - https://review.openstack.org/#/q/topic:bp/complex-anti-affinity-policies+(status:open+OR+status:merged) - Weekly Progress: Code is up for review. Few reviews done . 4. Volume multiattach enhancements: - https://blueprints.launchpad.net/nova/+spec/volume-multiattach-enhancements - https://review.openstack.org/#/q/topic:bp/volume-multiattach-enhancements+(status:open+OR+status:merged) - Weekly Progress: Waiting to hear from mriedem about his WIP on base patch - https://review.openstack.org/#/c/569649/3 5. API Extensions merge work - https://blueprints.launchpad.net/nova/+spec/api-extensions-merge-rocky - https://review.openstack.org/#/q/project:openstack/nova+branch:master+topic:bp/api-extensions-merge-rocky - Weekly Progress: Good progress. 1/3 part is merged. Bugs: ==== We discussed in office hour to start reviewing the in-progress bugs and minimize the number. From next week, I will show the weekly progress on the bug numbers. Current Bugs Status: Critical bug 0 High importance bugs 2 Status: New bugs 0 Confirmed/Triage 30 In-progress bugs 36 Incomplete: 4 ===== Total: 70 NOTE- There might be some bug which are not tagged as 'api' or 'api-ref', those are not in above list. Tag such bugs so that we can keep our eyes. Ref: https://etherpad.openstack.org/p/nova-api-weekly-bug-report -gmann From dtantsur at redhat.com Wed Jul 4 11:24:48 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Wed, 4 Jul 2018 13:24:48 +0200 Subject: [openstack-dev] [tc] [all] TC Report 18-26 In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C1432C5@EX10MBOX03.pnnl.gov> References: <70afeb87-37c9-1595-ffa4-aadbd1a90228@gmail.com> <1A3C52DFCD06494D8528644858247BF01C141B52@EX10MBOX03.pnnl.gov> <80d67ca3-81ef-875a-7ccb-2afad8913fb1@redhat.com> <67f015ab-b181-3ff0-4e4b-c30c503e1268@gmail.com> <1A3C52DFCD06494D8528644858247BF01C1432C5@EX10MBOX03.pnnl.gov> Message-ID: <8c967664-76a5-8dde-6c3b-80801641eb9c@redhat.com> Tried hard to avoid this thread, but this message is so much wrong.. On 07/03/2018 09:48 PM, Fox, Kevin M wrote: > I don't dispute trivial, but a self hosting k8s on bare metal is not incredibly hard. In fact, it is easier then you might think. k8s is a platform for deploying/managing services. Guess what you need to provision bare metal? Just a few microservices. A dhcp service. dhcpd in a daemonset works well. some pxe infrastructure. pixiecore with a simple http backend works pretty well in practice. a service to provide installation instructions. nginx server handing out kickstart files for example. and a place to fetch rpms from in case you don't have internet access or want to ensure uniformity. nginx server with a mirror yum repo. Its even possible to seed it on minikube and sluff it off to its own cluster. 
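(For readers who have not seen the pattern being described: the "dhcpd in a daemonset" piece would look roughly like the sketch below. The image, namespace and mounts are placeholders, not a reference implementation.)

    # Illustrative only -- a host-networked DHCP server on each provisioning node
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: provisioning-dhcp
      namespace: bare-metal
    spec:
      selector:
        matchLabels:
          app: provisioning-dhcp
      template:
        metadata:
          labels:
            app: provisioning-dhcp
        spec:
          hostNetwork: true               # DHCP has to sit on the node's broadcast domain
          containers:
          - name: dhcpd
            image: example.org/dhcpd:latest    # placeholder image
            securityContext:
              capabilities:
                add: ["NET_ADMIN"]
            volumeMounts:
            - name: conf
              mountPath: /etc/dhcp
          volumes:
          - name: conf
            configMap:
              name: dhcpd-conf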
> > The main hard part about it is that currently no one is shipping a reference implementation of the above. That may change... > > It is certainly much, much easier than deploying enough OpenStack to get a self-hosting ironic working. Side note: no, it's not. What you describe is about as hard as installing standalone ironic from scratch and much harder than using bifrost for everything. Especially when you try to do it in production. Especially with unusual operating requirements ("no TFTP servers on my network"). Also, sorry, I cannot resist: "Guess what you need to orchestrate containers? Just a few things. A container runtime. Docker works well. Some remote execution tooling. Ansible works pretty well in practice. It is certainly much, much easier than deploying enough k8s to get self-hosting container orchestration working." Such oversimplifications won't bring us anywhere. Sometimes things are hard because they ARE hard. Where are the people complaining that installing a full GNU/Linux distribution from upstream tarballs is hard? How many operators here use LFS as their distro? If we are okay with using a distro for GNU/Linux, why does using a distro for OpenStack cause so much contention?
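For comparison, the bifrost path mentioned above looks roughly like the following. The playbook and inventory names follow the bifrost documentation of this era, but treat the exact invocation as a sketch to double-check against the current docs rather than a verified recipe:

    # Standalone ironic via bifrost -- rough sketch
    git clone https://git.openstack.org/openstack/bifrost
    cd bifrost
    bash ./scripts/env-setup.sh                 # install ansible and friends
    cd playbooks
    # install ironic, dnsmasq and the iPXE/TFTP bits on this host
    ansible-playbook -vvvv -i inventory/target install.yaml
    # enroll and deploy nodes described in a JSON/YAML inventory file
    ansible-playbook -vvvv -i inventory/bifrost_inventory.py enroll-dynamic.yaml
    ansible-playbook -vvvv -i inventory/bifrost_inventory.py deploy-dynamic.yaml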
> > Best, > -jay > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From jpena at redhat.com Wed Jul 4 13:33:57 2018 From: jpena at redhat.com (Javier Pena) Date: Wed, 4 Jul 2018 09:33:57 -0400 (EDT) Subject: [openstack-dev] [packaging-rpm] PTL on vacation In-Reply-To: <1141771572.15260061.1530711045239.JavaMail.zimbra@redhat.com> Message-ID: <239151821.15260347.1530711237434.JavaMail.zimbra@redhat.com> Hi, I will be on vacation between July 9th and July 27th, without much e-mail access depending on the specific day. Jakub Ruzicka (jruzicka) will be my deputy during that time. Regards, Javier From sean.mcginnis at gmx.com Wed Jul 4 21:38:41 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 4 Jul 2018 16:38:41 -0500 Subject: [openstack-dev] [tc] Technical Committee Update for 3 July In-Reply-To: <1530641744-sup-28@lrrr.local> References: <1530641744-sup-28@lrrr.local> Message-ID: <20180704213841.GA10834@sm-workstation> > > Office hour logs from last week: > > * http://eavesdrop.openstack.org/meetings/tc/2018/tc.2018-06-27-01.00.html > * http://eavesdrop.openstack.org/meetings/tc/2018/tc.2018-06-28-15.00.html > * http://eavesdrop.openstack.org/meetings/tc/2018/tc.2018-07-03-09.01.html > > In the absence of any feedback about using the meeting bot to record the > office hours, we will continue to do so, for now. > I would say the absence of feedback is an indication that there isn't anyone that sees a strong benefit for doing it this way. Or at least strong enough to step forward and say they prefer it. I would propose based on this lack of feedback that we go back to just having our predesignated office hour times, and anyone interested in catching up on what, if anything, was discussed during office hours can go to the the point in the IRC logs that they are interested in. That also allows for picking up on any other things that were discussed in the channel that ended up before or after someone did the #{start|end}meeting action. From fungi at yuggoth.org Wed Jul 4 22:02:44 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 4 Jul 2018 22:02:44 +0000 Subject: [openstack-dev] [tc] Technical Committee Update for 3 July In-Reply-To: <20180704213841.GA10834@sm-workstation> References: <1530641744-sup-28@lrrr.local> <20180704213841.GA10834@sm-workstation> Message-ID: <20180704220244.7jcqcl7miq7o2tuo@yuggoth.org> On 2018-07-04 16:38:41 -0500 (-0500), Sean McGinnis wrote: [...] > I would propose based on this lack of feedback that we go back to > just having our predesignated office hour times, and anyone > interested in catching up on what, if anything, was discussed > during office hours can go to the the point in the IRC logs that > they are interested in. [...] Heartily seconded. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From ekcs.openstack at gmail.com Wed Jul 4 23:48:11 2018 From: ekcs.openstack at gmail.com (Eric K) Date: Wed, 4 Jul 2018 23:48:11 +0000 Subject: [openstack-dev] [congress] meeting cancelled Message-ID: Hi all, I’m not going to be able to make the meeting this week. Let’s resume next week =) I’m still available by email. Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From ianyrchoi at gmail.com Thu Jul 5 01:32:30 2018 From: ianyrchoi at gmail.com (Ian Y. Choi) Date: Thu, 5 Jul 2018 10:32:30 +0900 Subject: [openstack-dev] [OpenStack-I18n] [I18n] IRC Office hours: 2018/07/05 13:00-14:00 UTC In-Reply-To: <0ea3d2b944a798bb07f0e387374eeab1@arcor.de> References: <0ea3d2b944a798bb07f0e387374eeab1@arcor.de> Message-ID: <662f2080-ed2d-8476-c817-c6ccee52e8e8@gmail.com> Hello, I am also available on today office hour - anyone interested in I18n office hours subscribing openstack-dev mailing list is also welcome :) With many thanks, /Ian Frank Kloeker wrote on 7/3/2018 6:30 PM: > Hello, > > good to be back. Still sorting emails, messages and things. Let's meet > on Thursday in our team meeting in the new Office Hour format: > 2018/07/05 13:00-14:00 UTC > If you have anything to share please let us know on wiki page [1] > > kind regards > > Frank > > > [1] https://wiki.openstack.org/wiki/Meetings/I18nTeamMeeting > > _______________________________________________ > OpenStack-I18n mailing list > OpenStack-I18n at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-i18n From emilien at redhat.com Thu Jul 5 01:51:20 2018 From: emilien at redhat.com (Emilien Macchi) Date: Wed, 4 Jul 2018 19:51:20 -0600 Subject: [openstack-dev] [puppet][tripleo] Why is this acceptance test failing? In-Reply-To: <20180704012937.6eoffaxeeeq4oadg@redhat.com> References: <20180704012937.6eoffaxeeeq4oadg@redhat.com> Message-ID: The actual problem is that the manifest isn't idempotent anymore: http://logs.openstack.org/47/575147/16/check/puppet-openstack-beaker-centos-7/3f70cc9/job-output.txt.gz#_2018-07-04_00_42_19_705516 So something in your patch is breaking the Keystone_domain provider and makes it non idempotent. On Tue, Jul 3, 2018 at 7:30 PM Lars Kellogg-Stedman wrote: > I need another set of eyes. > > I have a review that keeps failing here: > > > http://logs.openstack.org/47/575147/16/check/puppet-openstack-beaker-centos-7/3f70cc9/job-output.txt.gz#_2018-07-04_00_42_19_696966 > > It's looking for the regular expression: > > /Puppet::Type::Keystone_tenant::ProviderOpenstack: Support for a > resource without the domain.*using 'Default'.*default domain id is '/ > > The output shown in the failure message contains: > > [1;33mWarning: Puppet::Type::Keystone_tenant::ProviderOpenstack: > Support for a resource without the domain set is deprecated in > Liberty cycle. It will be dropped in the M-cycle. Currently using > 'Default' as default domain name while the default domain id is > '7ddf1dfa7fac46679ba7ae2245bece2f'.[0m > > The regular expression matches the text! The failing test is here: > > > https://github.com/openstack/puppet-keystone/blob/master/spec/acceptance/default_domain_spec.rb#L59 > > I've been staring at this for a while and I'm not sure what's going > on. 
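The shape of the spec behind that failure, for anyone following along: beaker acceptance tests normally apply the manifest twice, and it is the second, catch_changes run that fails when the catalog is no longer idempotent -- the stderr regexp shown in the failure output can match perfectly and the example still fails on the exit code. A stripped-down sketch of the pattern (not the actual default_domain_spec.rb contents):

    # simplified beaker-rspec idempotency pattern
    it 'applies without error' do
      apply_manifest(pp, :catch_failures => true)   # first run: converge the system
    end

    it 'is idempotent' do
      # second run: puppet is invoked with --detailed-exitcodes, so any resource
      # that still reports a change makes this call fail, independently of the
      # stderr expectation attached to the block
      apply_manifest(pp, :catch_changes => true) do |result|
        expect(result.stderr).to match(/Support for a resource without the domain/)
      end
    end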
> > -- > Lars Kellogg-Stedman | larsks @ {irc,twitter,github} > http://blog.oddbit.com/ | > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From lars at redhat.com Thu Jul 5 02:52:22 2018 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Wed, 4 Jul 2018 22:52:22 -0400 Subject: [openstack-dev] [puppet][tripleo] Why is this acceptance test failing? In-Reply-To: References: <20180704012937.6eoffaxeeeq4oadg@redhat.com> Message-ID: <20180705025222.qhubpmtyyfrrhbuk@redhat.com> On Wed, Jul 04, 2018 at 07:51:20PM -0600, Emilien Macchi wrote: > The actual problem is that the manifest isn't idempotent anymore: > http://logs.openstack.org/47/575147/16/check/puppet-openstack-beaker-centos-7/3f70cc9/job-output.txt.gz#_2018-07-04_00_42_19_705516 Hey Emilien, thanks for taking a look. I'm not following -- or maybe I'm just misreading the failure message. It really looks to me as if the failure is caused by a regular expression; it says: Failure/Error: apply_manifest(pp, :catch_changes => true) do |result| expect(result.stderr) .to include_regexp([/Puppet::Type::Keystone_tenant::ProviderOpenstack: Support for a resource without the domain.*using 'Default'.*default domain id is '/]) end And yet, the regular expression in that check clearly matches the output shown in the failure message. What do you see that points at an actual idempotency issue? (I wouldn't be at all surprised to find an actual problem in this change; I've fixed several already. I'm just not sure how to turn this failure into actionable information.) -- Lars Kellogg-Stedman | larsks @ {irc,twitter,github} http://blog.oddbit.com/ | From thierry at openstack.org Thu Jul 5 07:51:17 2018 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 5 Jul 2018 09:51:17 +0200 Subject: [openstack-dev] [tc] Technical Committee Update for 3 July In-Reply-To: <20180704220244.7jcqcl7miq7o2tuo@yuggoth.org> References: <1530641744-sup-28@lrrr.local> <20180704213841.GA10834@sm-workstation> <20180704220244.7jcqcl7miq7o2tuo@yuggoth.org> Message-ID: Jeremy Stanley wrote: > On 2018-07-04 16:38:41 -0500 (-0500), Sean McGinnis wrote: > [...] >> I would propose based on this lack of feedback that we go back to >> just having our predesignated office hour times, and anyone >> interested in catching up on what, if anything, was discussed >> during office hours can go to the the point in the IRC logs that >> they are interested in. > [...] > > Heartily seconded. Thirded. Office hours were meant to encourage gathering around specific times (1) to increase the odds of reaching critical mass necessary for discussion, and (2) to ensure presence for outsiders wanting to reach out to the TC. The meeting bot enforces a "start" and an "end" to the discussion. It makes the hour busier. It encourages the discussion to stop rather than to continue outside of the designated times. It discourages random discussions outside the hour (since it won't be logged the same). And imho discourages external questions (since they would be "on the record" and interrupt busy discussions). So yes, I would prefer it to end. Since I don't like to shut down an experiment without proposing something else, here would be my suggestion. 
I would like to see a middle way between raw logs and meeting reports -- a way to take notes on a discussion channel the same way we document a meeting, but without a start or an end. Automatically producing a report with #info #agree #link entries every day or week, without changing topics, requiring chairs, or using start/endmeeting. Then you get the benefit of a summary (with links to raw logs) without constraining the discussion to specific "hours". If we are good at documenting, it might even reduce the need to read all logs for the channel -- just check the summary for interesting mentions and follow links if interested. The bot could even serve yesterday's report to you in privmsg if you asked for it. That feature would, I believe, be reused in other channels. -- Thierry Carrez (ttx) From tobias.rydberg at citynetwork.eu Thu Jul 5 07:58:26 2018 From: tobias.rydberg at citynetwork.eu (Tobias Rydberg) Date: Thu, 5 Jul 2018 09:58:26 +0200 Subject: [openstack-dev] [publiccloud-wg] Meeting this afternoon for Public Cloud WG Message-ID: Hi folks, Time for a new meeting for the Public Cloud WG. Agenda draft can be found at https://etherpad.openstack.org/p/publiccloud-wg, feel free to add items to that list. See you all at IRC 1400 UTC in #openstack-publiccloud Cheers, Tobias -- Tobias Rydberg Senior Developer Twitter & IRC: tobberydberg www.citynetwork.eu | www.citycloud.com INNOVATION THROUGH OPEN IT INFRASTRUCTURE ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED From jean-daniel.bonnetot at corp.ovh.com Thu Jul 5 08:02:34 2018 From: jean-daniel.bonnetot at corp.ovh.com (Jean-Daniel Bonnetot) Date: Thu, 5 Jul 2018 08:02:34 +0000 Subject: [openstack-dev] [publiccloud-wg] Meeting this afternoon for Public Cloud WG In-Reply-To: References: Message-ID: <6AE18849-2F98-419B-833E-678C744A8CDE at corp.ovh.com> Sorry guys, I'm not available once again. See you next time. Jean-Daniel Bonnetot ovh.com | @pilgrimstack On 05/07/2018 09:59, "Tobias Rydberg" wrote: Hi folks, Time for a new meeting for the Public Cloud WG. Agenda draft can be found at https://etherpad.openstack.org/p/publiccloud-wg, feel free to add items to that list. See you all at IRC 1400 UTC in #openstack-publiccloud Cheers, Tobias -- Tobias Rydberg Senior Developer Twitter & IRC: tobberydberg www.citynetwork.eu | www.citycloud.com INNOVATION THROUGH OPEN IT INFRASTRUCTURE ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From tdecacqu at redhat.com Thu Jul 5 09:17:17 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Thu, 05 Jul 2018 09:17:17 +0000 Subject: [openstack-dev] [all] log-classify project update (anomaly detection in CI/CD logs) In-Reply-To: <1530601298.luby16yqut.tristanC at fedora> References: <1530601298.luby16yqut.tristanC at fedora> Message-ID: <1530780669.k1udih7bo7.tristanC at fedora> On July 3, 2018 7:39 am, Tristan Cacqueray wrote: [...] > There is a lot to do and it will be challenging. To that effect, I would > like to propose an initial meeting with all interested parties. > Please register your irc name and timezone in this etherpad: > > https://etherpad.openstack.org/p/log-classify > So far, the mean timezone is UTC+1.75. I've added date proposals from the 16th to the 20th of July. Please add a '+' to the ones you can attend.
I'll follow-up next week with an ical file for the most popular. Thanks, -Tristan -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From doug at doughellmann.com Thu Jul 5 13:19:23 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 05 Jul 2018 09:19:23 -0400 Subject: [openstack-dev] [release-announce][ironic] ironic 11.0.0 (rocky) In-Reply-To: References: Message-ID: <1530796641-sup-5765@lrrr.local> I want to compliment the Ironic team on writing such engaging and comprehensive release notes. Nice work! Doug Excerpts from no-reply's message of 2018-07-05 10:24:19 +0000: > We are gleeful to announce the release of: > > ironic 11.0.0: OpenStack Bare Metal Provisioning > > This release is part of the rocky release series. > > The source is available from: > > https://git.openstack.org/cgit/openstack/ironic > > Download the package from: > > https://tarballs.openstack.org/ironic/ > > Please report issues through launchpad: > > https://bugs.launchpad.net/ironic > > For more details, please see below. > > 11.0.0 > ^^^^^^ > > > Prelude > ******* > > I R O N I C turns the dial to *11* In preparation for the OpenStack > Rocky development cycle release, the "ironic" Bare Metal as a Service > team announces the release of version 11.0. While it is not quite like > a volume knob, this release lays the foundation for features coming in > future releases and user experience enhancements. Some of these > include the BIOS configuration framework, power fault recovery, > additonal error handling, refactoring, removal of classic drivers, and > many bug fixes. > > > New Features > ************ > > * Adds the healthcheck middleware from oslo, configurable via the > "[healthcheck]/enabled" option. This middleware adds a status check > at */healthcheck*. This is useful for load balancers to determine if > a service is up (and add or remove it from rotation), or for > monitoring tools to see the health of the server. This endpoint is > unauthenticated, as not all load balancers or monitoring tools > support authenticating with a health check endpoint. > > * Adds support to abort the inspection of a node in the "inspect > wait" state, as long as this operation is supported by the inspect > interface in use. A node in the "inspect wait" state accepts the > "abort" provisioning verb to initiate the abort process. This > feature is supported by the "inspector" inspect interface and is > available starting with API version 1.41. > > * Adds support for reading and changing the node's "bios_interface" > field and enables the GET endpoints to check BIOS settings, if they > have already been cached. This requires a compatible > "bios_interface" to be set. This feature is available starting with > API version 1.40. > > * The new ironic configuration setting "[deploy]/default_boot_mode" > allows the operator to set the default boot mode when ironic can't > pick boot mode automatically based on node configuration, hardware > capabilities, or bare-metal machine configuration. > > * Adds support to the "redfish" management interface for reading and > setting bare metal node's boot mode. > > * Adds new Power Distribution Unit (PDU) "snmp" driver type - > BayTech MRP27. > > * Adds new "auto" type of the "driver_info/snmp_driver" setting > which makes ironic automatically select a suitable SNMP driver type > based on the "SNMPv2-MIB::sysObjectID" value as reported by the PDU > being managed. 
> > * Adds SNMPv3 message authentication and encryption features to > ironic "snmp" hardware type. To enable these features, the following > parameters should be used in the node's "driver_info": > > * "snmp_user" > > * "snmp_auth_protocol" > > * "snmp_auth_key" > > * "snmp_priv_protocol" > > * "snmp_priv_key" > > Also adds support for the "context_engine_id" and "context_name" > parameters of SNMPv3 message at ironic "snmp" hardware type. They > can be configured in the node's "driver_info". > > * Add "?detail=" boolean query to the API list endpoints to provide > a more RESTful alternative to the existing "/nodes/detail" and > similar endpoints. The default is False. Now these API requests are > possible: > > * "/nodes?detail=True" > > * "/ports?detail=True" > > * "/chassis?detail=True" > > * "/portgroups?detail=True" > > * Adds "external" storage interface which is short for "externally > managed". This adds logic to allow the Bare Metal service to > identify when a BFV scenario is being requested based upon the > configuration set for "volume targets". > > The user must create the entry, and no syncronizaiton with a Block > Storage service will occur. Documentation > (https://docs.openstack.org/ironic/latest/admin/boot-from- > volume.html#use-without-cinder) has been updated to reflect how to > use this interface. > > * Adds the "[deploy]enable_ata_secure_erase" option which allows an > operator to disable ATA Secure Erase for all nodes being managed by > the conductor. This setting defaults to "True" which aligns with the > prior behavior of the Bare Metal service. > > * Adds new parameter fields to driver_info, which will become > mandatory in Stein release: > > * "xclarity_manager_ip": IP address of the XClarity Controller. > > * "xclarity_username": Username for the XClarity Controller. > > * "xclarity_password": Password for XClarity Controller username. > > * "xclarity_port": Port to be used for XClarity Controller > connection. > > * Adds support for the "ipmitool" power interface to the "irmc" > hardware type. > > * Adds support for the "fault" field in the node, beginning with API > version 1.42. This field records the fault, if any, detected by > ironic for a node. If no fault is detected, the "fault" is "None". > The "fault" field value is set to one of following values according > to different circumstances: > > * "power failure": when a node is put into maintenance due to > power sync failures that exceed max retries. > > * "clean failure": when a node is put into maintenance due to > failure of a cleaning operation. > > * "rescue abort failure": when a node is put into maintenance due > to failure of cleaning up during rescue abort. > > The "fault" field will be set to "None" if an operator manually set > maintenance to "False". The "fault" field can be used as a filter > for querying nodes. > > * Adds power failure recovery to ironic. For nodes that ironic had > put into maintenance mode due to power failure, ironic periodically > checks their power state, and moves them out of maintenance mode > when power state can be retrieved. The interval of this check is > configured via "[conductor]power_failure_recovery_interval" > configuration option, the default value is 300 (seconds). Set to 0 > to disable this behavior. > > * Adds support for RAID 1 creation on Dell Boot Optimized Storage > Solution (BOSS). > > * Adds support for rescue interface "agent" for the "ilo" hardware > type when the corresponding boot interface being used is "ilo- > virtual-media". 
The supported values of the rescue interface for the > "ilo" hardware type are "agent" and "no-rescue". The default value > is "no-rescue". > > * Adds support for rescue interface "agent" for the "irmc" hardware > type when the corresponding boot interface is "irmc-virtual-media". > The supported values of rescue interface for "irmc" hardware type > are "agent" and "no-rescue". The default value is "no-rescue". > > * Issuing a SIGHUP (e.g. "pkill -HUP ironic") to an ironic-api or > ironic-conductor service will cause the service to reload and use > any changed values for *mutable* configuration options. The mutable > configuration options are: > > * [DEFAULT]/debug > > * [DEFAULT]/log_config_append > > * [DEFAULT]/pin_release_version > > Mutable configuration options are indicated as such in the sample > configuration file > (https://docs.openstack.org/ironic/latest/configuration/sample- > config.html) by "Note: This option can be changed without > restarting". > > A warning is logged for any changes to immutable configuration > options. > > > Upgrade Notes > ************* > > * Adds an "inspect wait" state to handle asynchronous hardware > introspection. Caution should be taken due to the timeout monitoring > is shifted from "inspecting" to "inspect wait", please stop all > running asynchronous hardware inspection or wait until it is > finished before upgrading to the Rocky release. Otherwise nodes in > asynchronous inspection will be left at "inspecting" state forever > unless the database is manually updated. > > * Extends the "instance_info" column in the nodes table for > MySQL/MariaDB from up to 64KiB to up to 4GiB (type is changed from > TEXT to LONGTEXT). This upgrade will not be executed on PostgreSQL > as its TEXT is unlimited. > > * To use CoreOS based deploy/cleaning ramdisk built using Ironic > Python Agent from the Rocky release, Ironic should be upgraded to > the Rocky release if PXE is used. Otherwise, a node cannot be > deployed or cleaned because the IPA fails to boot due to an > unsupported parameter passed via PXE. See bug 2002093 > (https://storyboard.openstack.org/#!/story/2002093) for details. > > * With the deploy ramdisk based on Ironic Python Agent version 3.1.0 > and beyond, the drivers using "direct" deploy interface performs > "netboot" or "local" boot for whole disk image based on value of > boot option setting. When you upgrade Ironic Python Agent in your > deploy ramdisk, ensure that boot option is set appropriately for the > node. The boot option can be set using configuration > "[deploy]/default_boot_option" or as a "boot_option" capability in > node's "properties['capabilities']". Also please note that this > functionality requires "hexdump" command in the ramdisk. > > * "ironic-dbsync online_data_migrations" will migrate any port's and > port group's extra['vif_port_id'] value to their > internal_info['tenant_vif_port_id']. For API versions >= 1.28, the > ability to attach/detach the VIF via the port's or port group's > extra['vif_port_id'] will not be supported starting with the Stein > release. > > Any out-of-tree network interface implementations that had a > different behavior in support of attach/detach VIFs via the port or > port group's extra['vif_port_id'] must be updated appropriately. > > * It is no longer possible to load a classic driver. Only hardware > types are supported from now on. > > * The "/v1/drivers/?type=classic" API always returns an empty list > since classic drivers can no longer be loaded. 
> > * The deprecated iDRAC classic drivers "pxe_drac" and > "pxe_drac_inspector" have been removed. Please use the "idrac" > hardware type. > > * The deprecated iLO classic drivers "pxe_ilo", "iscsi_ilo" and > "agent_ilo" have been removed. Please use the "ilo" hardware type. > > * The deprecated classic drivers "pxe_ipmitool" and "agent_ipmitool" > have been removed. Please use the "ipmi" hardware type instead. > > * The deprecated classic drivers "pxe_irmc", "agent_irmc" and > "iscsi_irmc" have been removed. Please use the "irmc" hardware type. > > * The deprecated classic drivers "iscsi_pxe_oneview" and > "agent_pxe_oneview" have been removed. Please use the "oneview" > hardware type. > > * The deprecated "pxe_snmp" classic driver has been removed. Please > use the "snmp" hardware type instead. > > * The deprecated classic drivers "pxe_ucs" and "agent_ucs" have been > removed. Please use the "cisco-ucs-managed" hardware type. > > * The deprecated classic drivers "pxe_iscsi_cimc" and > "pxe_agent_cimc" have been removed. Please use the "cisco-ucs- > standalone" hardware type. > > * All fake classic drivers, deprecated in the Queens release, have > been removed. This includes: > > * "fake" > > * "fake_agent" > > * "fake_cimc" > > * "fake_drac" > > * "fake_ilo" > > * "fake_inspector" > > * "fake_ipmitool" > > * "fake_ipmitool_socat" > > * "fake_irmc" > > * "fake_oneview" > > * "fake_pxe" > > * "fake_snmp" > > * "fake_soft_power" > > * "fake_ucs" > > Please use the "fake-hardware" hardware type instead (you can > combine it with any other interfaces, fake or real). > > * Adds a new configuration option "[disk_utils]partprobe_attempts" > which defaults to 10. This is the maximum number of times to try to > read a partition (if creating a config drive) via a "partprobe" > command. Set it to 1 if you want the previous behavior, where no > retries were done. > > * Power failure recovery introduces a new configuration option > "[conductor]power_failure_recovery_interval", which is enabled and > set to 300 seconds by default. In case the default value is not > suitable for the needs or scale of a deployment, please make > adjustment or turn it off during upgrade. > > * Power failure recovery does not apply to nodes that were in > maintenance mode due to power failure before upgrade, they have to > be manually moved out of maintenance mode. > > * Deprecated options "ansible_deploy_username" and > "ansible_deploy_key_file" in node driver_info for the "ansible" > deploy interface were removed and will be ignored. Use > "ansible_username" and "ansible_key_file" options in the node > driver_info respectively. > > * The behavior for retention of VIF interface attachments has > changed. > > If your use of the Bare Metal service is reliant upon the behavior > of the VIFs being retained, which was introduced as a behavior > change during the Ocata cycle, then you must update your tooling to > explicitly re-add the VIF attachments prior to deployment. > > * Deprecated option "[keystone]\region_name" was removed and will be > ignored. Instead use "region_name" option in other sections related > to contacting other services ("[service_catalog]", "[cinder]", > "[glance]", "[neutron]", ["swift"] and "[inspector]"). > > As the option "[keystone]\region_name" was the only option in > "[keystone]" section of ironic configuration file, this section was > removed as well. > > > Deprecation Notes > ***************** > > * Adds an "inspect wait" state to handle asynchronous hardware > introspection. 
The "[conductor]inspect_timeout" configuration option > is deprecated for removal, please use > "[conductor]inspect_wait_timeout" instead to specify the timeout of > inspection process. > > * Deprecates the "snmp_security" field in "driver_info" for ironic > "snmp" hardware type, it will be removed in Stein release. Please > use "snmp_user" field instead. > > * The "[inspector]enabled" configuration option is deprecated. It > only affected classic drivers, and with their removal it no longer > has any effect. Use the "enabled_inspect_interfaces" option to > enable/disable support for ironic-inspector. > > * The "oneview" hardware type, as well as the supporting driver > interfaces have been deprecated and are scheduled to be removed from > ironic in the Stein development cycle. This is due to the lack of > operational Third Party testing to help ensure that the support for > Oneview is functional. Oneview Third Party CI was shutdown just > prior to the start of the Rocky development cycle, and at the time > of this deprecation the Ironic community has no indication that > testing will be restablished. Should testing be restablished, this > deprecation shall be rescinded. > > * Configuration options "[xclarity]/manager_ip", > "[xclarity]/username", and "[xclarity]/password" are deprecated and > will be removed in the Stein release. > > * The "enabled_drivers" option is now deprecated. Since classic > drivers can no longer be loaded, setting this option to anything > non-empty will result in the conductor failing to start. > > > Security Issues > *************** > > * Fixes an issue where an enabled console could be left running > after a node was unprovisioned. This allowed a user to view the > console even after the instance was gone. Ironic now stops the > console during unprovisioning to block this. > > * Xclarity password specified in configuration file is now properly > masked during logging. > > > Bug Fixes > ********* > > * Fixes bug 1749755 (https://bugs.launchpad.net/ironic/+bug/1749755) > causing timeouts to not work properly because an unsupported > sqalchemy filter was being used. > > * Adds more "ipmitool" error messages to be treated as retryable by > the ipmitool interfaces (such as power and management hardware > interfaces). Specifically, "Node busy", "Timeout", "Out of space" > and "BMC initialization in progress" reporting emitted by "ipmitool" > will cause ironic to retry IPMI command. This change should improve > the reliability of IPMI-based communicaton with BMC. > > * If the bare metal machine's boot mode differs from the requested > one, ironic will now attempt to set requested boot mode on the bare > metal machine and fail explicitly if the driver does not support > setting boot mode on the node. > > * The config drive passed to the node can now contain more than > 64KiB in case of MySQL/MariaDB. For more details see bug 1596421 > (https://bugs.launchpad.net/ironic/+bug/1596421). > > * Fixes a bug preventing a node from booting into the user instance > after unrescuing if instance netboot is used. See bug 1749433 > (https://bugs.launchpad.net/ironic/+bug/1749433) for details. > > * Fixes rescue timeout due to incorrect kernel parameter in the iPXE > script. See bug 1749860 > (https://bugs.launchpad.net/ironic/+bug/1749860) for details. > > * Fixes a bug where a node's hardware type cannot be changed to > another hardware type which doesn't support any hardware interface > currently used. 
See bug 2001832 > (https://storyboard.openstack.org/#!/story/2001832) for details. > > * Fixes a bug that exposes an internal node ID in an error message > when requested to delete a trait which doesn't exist. See bug > 2002062 (https://storyboard.openstack.org/#!/story/2002062) for > details. > > * When a conductor managing a node dies mid-cleaning, the node would > get stuck in the CLEANING state. Now, upon conductor startup, nodes in > the CLEANING state will be moved to the CLEANFAIL state. > > * Fixes an issue where the parameters required in driver_info > differed from the descriptions in the documentation. > > * Fixes an issue with validation of Infiniband ports. Infiniband > ports do not require the "local_link_connection" field to be > populated as the network topology is discoverable by the Infiniband > Subnet Manager. See bug 1753222 (https://launchpad.net/bugs/1753222) > for details. > > * Fixes an issue where RAID 10 creation fails with greater than 16 > drives when using the "idrac" hardware type. See bug 2002771 > (https://storyboard.openstack.org/#!/story/2002771) for details. > > * Adds missing noop implementations (e.g. "no-inspect") to the "fake- > hardware" hardware type. This fixes enabling this hardware type > without enabling all (even optional) "fake" interfaces. > > * Fixes an issue seen during cleaning when the node being cleaned > has one or more traits assigned. This issue caused cleaning to fail, > and the node to enter the "clean failed" state. See bug 1750027 > (https://bugs.launchpad.net/ironic/+bug/1750027) for details. > > * Fixes an issue with iPXE where the incorrect iSCSI volume > authentication data was being used with boot from volume when multi- > attach volumes were present. > > * Fixes the "direct" deploy interface to invoke "boot.prepare_instance" > irrespective of the image type being provisioned. It was calling > "boot.prepare_instance" only if the image being provisioned was a > partition image. See bugs 1713916 > (https://storyboard.openstack.org/#!/story/1713916) and 1750958 > (https://storyboard.openstack.org/#!/story/1750958) for details. > > * Fixes the HTTP response code for a validation failure when > attempting to move an ironic node to the active state. A validation > failure in this scenario now responds with a 400 status code, > correctly indicating a user input error. > > * Fixes an issue where node ramdisk heartbeat operations would > collide with conductor locks and erroneously record an error in > the node's "last_error" field. > > * Fixes collection of periodic tasks from hardware interfaces that > are not used in any enabled classic drivers. See bug 2001884 > (https://storyboard.openstack.org/#!/story/2001884) for details. > > * The periodic tasks for the "inspector" inspect interface are no > longer disabled if the "[inspector]enabled" option is not set to > "True". The help string of this option claims that it does not apply > to hardware types. In any case, the periodic tasks are only run if > any enabled classic driver or hardware interface requires them. > > * Fixes a compatibility issue where the iPXE kernel command line was > no longer compatible with dracut. The "ip" parameter has been removed > as it is incompatible with the "BOOTIF" and missing "autoconf" > parameters when dracut is used. Further details can be found in > storyboard (https://storyboard.openstack.org/#!/story/2001969). > > * Fixes an empty "last_error" field on cleaning failures. > > * Fixes an issue where only nodes in the "DEPLOYING" state would have > locks cleared for the nodes. 
Now, upon node takeover, any locks that > are left from the old conductor are cleared by the new one. > > * Adds a new configuration option "[disk_utils]partprobe_attempts" > which defaults to 10. This is the maximum number of times to try to > read a partition (if creating a config drive) via a "partprobe" > command. Previously, no retries were done, which caused failures. > This addresses bug 1756760 > (https://storyboard.openstack.org/#!/story/1756760). > > * Fixes a rare race condition which resulted in the port list API > returning HTTP 400 (bad request) if some nodes were being removed in > parallel. See bug 1748893 (https://bugs.launchpad.net/bugs/1748893) > for details. > > * Fixes an issue where no error was raised if there were no PXE- > enabled ports available for the node when creating a neutron port. > See bug 2001811 (https://storyboard.openstack.org/#!/story/2001811) > for more details. > > * Fixes a potential case of VIF records being orphaned, as the service > now removes all records of VIF attachments upon the teardown of a > deployed node. This resolves issues where > it is operationally impossible in some circumstances to remove a VIF > attachment while a node is being undeployed, as the Compute service > will only attempt to remove the VIF for five minutes. > > See bug 1743652 (https://bugs.launchpad.net/ironic/+bug/1743652) for > more details. > > * The Ironic API now returns "503 Service Unavailable" for actions > requiring a conductor when no conductors are online. Bug: 2002600 > (https://storyboard.openstack.org/#!/story/2002600). > > * Fixes an issue seen during node tear down where a port being > deleted by the Bare Metal service could be deleted by the Compute > service, leading to an unhandled error from the Networking service. > See story 2002637 for further details. > > * Fixes an issue where the "ilo" hardware type would not properly > update the boot mode on the bare metal machine for cleaning as per > the given "boot_mode" in the node's properties/capabilities. See bug 1559835 > (https://bugs.launchpad.net/ironic/+bug/1559835) for more details. > > * During node cleaning, the conductor was using a cached copy of the > node's driver_internal_info field. It was possible that the copy was > outdated, which would cause issues with the state of the node. This > has been fixed. For more information, see bug 2002688 > (https://storyboard.openstack.org/#!/story/2002688). > > * Fixes an issue where a node's "instance_info.traits" field could > be incorrectly formatted, or contain traits that are not traits of > the node. When validating drivers and prior to deployment, the Bare > Metal service now validates that a node's traits include all the > traits in its "instance_info.traits" field. See bug 1755146 > (https://bugs.launchpad.net/ironic/+bug/1755146) for details. > > * Reverts the fix for orphaned VIF records from the previous > release, as it causes a regression. See bug 1750785 > (https://bugs.launchpad.net/ironic/+bug/1750785) for details. > > > Other Notes > *********** > > * Adds an "inspect wait" state to handle asynchronous hardware > introspection. Returning "INSPECTING" from the "inspect_hardware" > method of the inspect interface is deprecated; "INSPECTWAIT" should be > returned instead. > > * Adds "get_boot_mode", "set_boot_mode" and > "get_supported_boot_modes" methods to the driver management interface. > Drivers can override these methods, implementing boot mode management > calls to the BMC of the baremetal nodes being managed. 
> > * Adds new method "validate_rescue()" to boot interface to validate > node's properties related to rescue operation. This method is called > by the validate() method of rescue interface. > > * For out-of-tree drivers that have vendor passthru methods > (https://docs.openstack.org/ironic/latest/contributor/vendor- > passthru.html). The "async" parameter of the "passthru" and > "driver_passthru" decorators is deprecated and will be removed in > the Stein cycle. Please use its replacement instead, the > "async_call" parameter. For more information, see bug 1751306 > (https://storyboard.openstack.org/#!/story/1751306). > > * The conductor no longer tries to collect or report sensors data > for nodes in maintenance mode. See bug 1652741 > (https://bugs.launchpad.net/bugs/1652741). > > * On taking over nodes in "CLEANING" state, the new conductor moves > them to the "CLEAN FAIL" state and sets maintenance. > > * Removes the software metric named > "validate_boot_option_for_trusted_boot". This was the timing for a > short-lived, internal function that is already included in the > "PXEBoot.validate" metric. > > Changes in ironic 10.1.0..11.0.0 > -------------------------------- > > 53e7bae Remove support for creating and loading classic drivers > 9d049f7 Add a prelude for version 11 > c5fbf07 iDRAC RAID10 creation with greater than 16 drives > 778662a Remove doc of classic drivers from the admin guide > 194d042 Modifying 'whole_disk_image_url' and 'whole_disk_image_checksum' variable > 51ab42e Follow-up to update doc for oneview driver > 080f656 Small change of doc title for the drivers > a1fb291 Fix wrong in apidoc_excluded_paths > eac6834 Follow-up to update doc for ilo driver > ba0a782 Add BayTech MRP27 snmp driver type > bdd8d23 Follow-up to update doc for irmc driver > bd003c6 DevStack: Tiny changes following iRMC classic driver removal > 5b4ce3d include all versions of Node in release_mappings > 53048b9 Deprecate [inspector]enabled option > 2e568bd Do not disable inspector periodic tasks if [inspector]enabled is False > 1a07137 Remove the ipmitool classic drivers > 80d6c14 Add snmp driver auto discovery > a896cc4 During cleaning, use current node.driver_internal_info > ce444aa Rename test class > 3d8f3ec Remove the iRMC classic drivers > 384f966 Remove the OneView classic drivers > 6deb0c3 Remove the deprecated pxe_snmp driver > 575640c Remove the deprecated classic drivers for Cisco UCS hardware > 09e89c0 Remove the iDRAC classic drivers > 10bc397 Separate unit tests into different classes > 3f94f5d Add helper method for testing node fields > 6c301e7 Fix conductor manager unit tests > 9c7729d Remove the ilo classic drivers > b70b38e Move parse_instance_info_capabilities() to common utils.py > bfed31b Fix error when deleting a non-existent port > efa064b BIOS Settings: update admin doc > 1b295f2 BIOS Settings: add bios_interface field in NodePayload > 6acb6d9 BIOS Settings: update default BIOS setting version in db utils > 176942c Add documentation for XClarity Driver > b2ecd08 Release note clean-ups for ironic release > e3d6681 Move boot-related code to boot_mode_utils.py > 82fe2cb Raise TemporaryFailure if no conductors are online > b17c528 BIOS Settings: add sync_node_setting > 01a9016 Fix for Unable to create RAID1 on Dell BOSS card > 5795c57 Add an external storage interface > 8c6010d fix typos > 0b85240 fix typos > 233d7d5 Add detail=[True, False] query string to API list endpoints > 6b0290e Adds enable_ata_secure_erase option > 2d3e7e9 Remove the remaining fake drivers > 0b40813 
Document that nova-compute attaches VIF to active nodes on start up > 7c5f655 Added Redfish boot mode management > aaf17eb iRMC: Support ipmitool power interface with irmc hardware > 2822e05 Doc: Remove -r option for running a specific unit test > de6cfdb Fix stestr has no lower bound in test-requirements > 5e8f2e3 Adds boot mode support to ManagementInterface > d0dca90 Modify the Ironic api-ref's parameters in parameters.yaml > 39d8b76 rectify 'a image ID' to 'an image ID' > 3a39431 change 'a ordinary file ' to 'an ordinary file' > 9a1dc71 Validating fault value when querying with fault field > 0b0e257 change 'a optional path' to 'an optional path' > 4f04124 Update links in README > 495d738 Remove the fake_ipmitool, fake_ipmitool_socat and fake_snmp drivers > 970f45a Add release notes link to README > a8c425a BIOS Settings: add admin doc > 47c2b15 Remove deprecated [keystone] config section > f40f145 Make method public to support out-of-band cleaning > 05e6dff Remove the fake_agent, fake_pxe and fake_inspector drivers > 500ca21 Consolidate the setting of ironic-extra-vars > 1a3a2c4 Remove deprecated ansible driver options > a64e119 Remove dulicate uses for zuul-cloner > 64a90a6 Comply with PTI for Python testing > 3a0fc77 fix tox python3 overrides > d951976 Remove the "fake" and "fake_soft_power" classic drivers > 7fcca34 Completely stop using the "fake" classic driver in unit tests > 8ee2f4b Power fault recovery follow up > 4d020a6 Adds more `ipmitool` errors as retryable > 2e7b2ba Stop using pxe_ipmitool in grenade > 4bc142e Fix FakeBIOS to allow tempest testing > 0c29837 Power fault recovery: Notification objects > b4c4eb9 Power fault recovery: API implementation > 146bbb4 Add mock to doc requirements to fix doc build > 1b5de91 Fix task_manager process_event docstring > bce7f11 Implements baremetal inspect abort > 5dcfac0 Add the ability to setup enabled bios interfaces in devstack > fd805e2 [Doc] Scheduling needs validated 'management' interface > d27b276 Fix authentication issues along with add multi extra volumes > ca92183 Stop passing IP address to IPA by PXE > 254d370 Add Node BIOS support - REST API > 2288645 Follow up to power fault recovery db tests > 0a1b165 Power fault recovery: apply fault > bae9e82 Reraise exception with converting node ID > 44f4768 Gracefully handle NodeLocked exceptions during heartbeat > 635f4a9 SNMPv3 security features added to the `snmp` driver > be1b6a3 Allow customizing libvirt NIC driver > a8e6fae Convert conductor manager unit tests to hardware types > a684883 Remove excessive usage of mock_the_extension_manager in unit tests - part 2 > 1d0f90c Improve exception handling in agent_base_vendor > 580d433 Check pep8 without ignoring D000 > d6deb1e Missing import of "_" > 6b44f26 Power fault recovery: db and rpc implementation > af7c6c4 Change exception msg of BIOS caching > 86a5a16 Remove excessive usage of mock_the_extension_manager in unit tests - part 1 > 2d46f48 Mark xclarity password as secret > 2846852 Fix E501 errors > 4197744 Fix tenant DeprecationWarning from oslo_context > f88d993 Fix tenant DeprecationWarning from oslo_context > 7a8b26d Tear down console during unprovisioning > f6dd50d Fix XClarity parameters discrepancy > 1a59ef9 Follow up to inspect wait implementation > 5839bba Silence F405 errors > 24c04d9 Fix W605 Errors > adaf918 Fix E305 Errors > 530a3ed Fix W504 errors > 3f460ba Gate fix: Cap hacking to avoid gate failure > 3048eb8 Preserve env when running vbmc > 6ff9a6b Make validation failure on node deploy a 4XX code > d2f2afa 
Install OSC during quickstart > 02d8fa1 Ignore new errors until we're able to fix them > d017153 BIOS Settings: Add BIOS caching > 1e24ef9 BIOS Settings: Add BIOSInterface > 02aad83 Remove ip parameter from ipxe command line > 863aa34 Clarify image_source with BFV > f2502cc Update install guide to require resource classes > 6d84922 Fix error thrown by logging in common/neutron.py > 0f404fa Add note to oneview docs re: derprecation > a6ae98f Deprecate Oneview > 2d2298a Switch to the fake-hardware hardware type for API tests > 3ae836d Remove the Keystone API V2.0 endpoint registration > ee04f56 Move API (functional) tests to separate jobs > d741556 Add unit test for check of glance image status > 7784f40 Devstack plugin support for Redfish and Hardware > 7ead206 Collect periodic tasks from all enabled hardware interfaces > acdc372 Stop verifying updated driver in creating task > 9eaff34 BIOS Settings: Add RPC object > 91251d1 fix a typo > 909c267 Trivial: Update pypi url to new url > 36ac298 Add more parameter explanation when create a node > 97fdd62 Fix test_get_nodeinfo_list_with_filters > 26694e0 Install reno to venv for creating release note > c6789ea Stop removing root uuid in vendor interfaces > 4fa1075 Fix ``agent`` deploy interface to call ``boot.prepare_instance`` > 05dd405 Update wording used in removal of VIFs > b27396d [devstack] Switch ironic to uWSGI > 5dda4ba Make ansible error message clearer > 61b04cf BIOS Settings: Add DB API > c7e938c BIOS Settings: Add bios_interface db field > 3ca9ec5 BIOS Settings: Add DB model > 5c1d5a8 Clean up driver_internal_info after tear_down > 75b654c Run jobs if requirements change > 3a4e259 Remove vifs upon teardown > 40a3fea uncap eventlet > 655038b Update auth_uri option to www_authenticate_uri > 6b91ba2 Resolve pep8 E402 errors and no longer ignore E402 > ca91d4d Remove pycodestyle version pin. Add E402 and W503 to ignore. > fc15be6 Pin pycodestyle to <=2.3.1 > 804349e Check for PXE-enabled ports when creating neutron ports > 6df82ee Implementation of inspect wait state > 006950e Update Launchpad references to Storyboard > 645c5fc Add reno for new config [disk_utils]partprobe_attempts > 8aa46de Implement a function to check the image status > 83c4ec9 Fix callback plugin for Ansible 2.5 compatability > 7ba42e0 Follow the new PTI for document build > 3e92382 Clarify deprecation of "async" parameter > 34277f6 Fix incompatible requirement in lower-constraints > 1ffa757 Reference architecture: small cloud with trusted tenants > 9ea09fc Update and replace http with https for doc links > f5605d1 Assume node traits in instance trait validation > 0f441ab Adding grub2 bootloader support to devstack plugin > 739fa6c Describe unmasking fields in security document > 37b85b6 Copy port[group] VIF info from extra to internal_info > aafa9ac DevStack: Enroll node with iRMC hardware > 57bca71 Stop overriding tempdir in unit test > 548a263 Uniformly capitalize parameter description > 5f03daf Gate: run ironic tests in the regular multinode job > 0267c27 Do not use async parameter > 5bbeb8b Remove the link to the old drivers wiki page > 9143ec7 add lower-constraints job > 052782c Test driver-requirements changes on standalone job > 2051f14 Updated from global requirements > b8725e5 Exclude Ansible 2.5 from driver-reqs > dcb8e82 Fix typos There are two 'the', delete one of them. > 6843be2 fix typos in documentation > 4f08f72 Fix nits in the XClarity Driver codebase. 
> d1cd215 Validate instance_info.traits against node traits > b93e5b0 Prevent overwriting of last_error on cleaning failures > 7c3058a Infiniband Port Configuration update[1] > c9e079d Rework Bare Metal service overview in the install guide > 261df51 Gate: stop setting IRONIC_ENABLED_INSPECT_INTEFACES=inspector > 5f55422 Follow-up patch for rescue mode devstack change > ae25fc4 devstack: enabled fake-hardware and fake interfaces > 30a557f Updated from global requirements > 1d38ad8 Add descriptions for config option choices > f30b2eb devstack: add support for rescue mode > af02064 Updated from global requirements > f7da3f6 Implements validate_rescue() for IRMCVirtualMediaBoot > 6bb5bd7 Updated from global requirements > 013992b Update config option for collecting sensor data > cef19cb Use node traits during upgrade > c6694b7 multinode, multitenant grenade votes in gate > b6c521c zuul: Remove duplicated TEMPEST_PLUGIN entry > ac65ec6 Use more granular mocking in test_utils > 90b9133 change python-libguestfs to python-guestfs for ubuntu > 9f912c0 Update links in README > 0ce6bce Updated from global requirements > 07e2dbd Remove useless variable > dcebb77 Don't validate local_link_connection when port has client-id > cd3c011 Updated from global requirements > 48d04b3 Update docstring to agent client related codes > fabcf1a Move execution of 'tools/check-releasenotes.py' to pep8 > 3984620 reloads mutable config values on SIGHUP > 92f5dad Make grenade-mulinode voting again > 843c773 tox.ini: flake8: Remove I202 from ignore list > c6f8d85 fix a typo in driver-property-response.json: s/doman/domain/ > 46ee76a Trivial: Remove the non ascii codes in tox.ini > c8ae245 Register traits on nodes in devstack > 3edeb4c [devstack] block iPXE boot from HTTPS TempURLs > 1b8f69d Fix issue with double mocking of utils.execute functions > 216ad85 Updates boot mode on the baremetal as per `boot_mode` > c66679f Support nested objects and object lists in as_dict > 08ed859 Revert "Don't try to lock for vif detach" > 5694b98 Rework logic handling reserved orphaned nodes in the conductor > 8f2e487 Set 'initrd' to 'rescue_ramdisk' for rescue with iPXE > 253c377 Update iLO documentation for deprecating classical drivers > 8fdf752 Increase the instance_info column size to LONGTEXT on MySQL/MariaDB > 85581f3 Update release instructions wrt grenade > 80f0859 [ansible] use manual-mgmt hw type in unit tests > 6682a3d Use oslo_db.sqlalchemy.test_fixtures > 93f376f Disable .pyc files for grenade multinode > f8f8f85 Add docs for ansible deploy interface > 902fbbe Update comment and mock about autospec not working on staticmethods > 4df93fc Build instance PXE options for unrescue > 9bf5a28 Updated from global requirements > 8af9e0b Fix default object versioning for Rocky > 366a44a Allow sqalchemy filtering by id and uuid > 52dcc64 Fix rare HTTP 400 from port list API > 2921fe6 Clean nodes stuck in CLEANING state when ir-cond restarts > 55454a3 Imported Translations from Zanata > 708a698 tox: stop validating locale files > 152f45c Switch contributor documentation to hardware types > f9a88a3 Stop using --os-baremetal-api-version in devstack by default > 99a330a Conductor version cannot be null in Rocky > d81d2e7 Add 'Other considerations' to security doc > 1372216 Updated from global requirements > 628e71c Implements validate_rescue() for IloVirtualMediaBoot > 75d3692 Update to standalone ironic doc > 927c487 Remove too large configdrive for handling error > 9bc1106 Added known issue to iDRAC driver docs > ce5fd96 Add missing 
noop implementations to fake-hardware > 0642649 Stop running standalone tests for classic drivers > fddd58f Stop running non-voting jobs in gate > 6f43941 Add optional healthcheck middleware > 338c22b releasing docs: document stable jobs for the tempest plugin > e2e9b76 Add meaningful exception in Neutron port show > 5233ef0 Clean up CI playbooks > 2d0dab2 Fix broken log message. > 1c16205 Add validate_rescue() method to boot interface > f5faf9c Empty commit to bump minor pre-detected version > a0a4796 Remove test_contains_current_release_entry > 996d579 Fix grammar errors > 6b995c0 Clean up RPC versions and database migrations for Rocky > 9a2ebde Remove validate_boot_option_for_trusted_boot metric > bd1f109 Update reno for stable/queens > 5aa7a19 Fixed some typos in test code. > cb5f513 cleanup: Remove usage of some_dict.keys() > 21ef50e Do not send sensors data for nodes in maintenance mode > c934ae5 Remove the deprecated "giturl" option > e038b67 Add Error Codes > 0eb138c Support setting inbound global-request-id > > > Diffstat (except docs and test files) > ------------------------------------- > > CONTRIBUTING.rst | 4 +- > README.rst | 9 +- > api-ref/source/baremetal-api-v1-chassis.inc | 15 +- > .../source/baremetal-api-v1-node-management.inc | 16 +- > api-ref/source/baremetal-api-v1-nodes-vifs.inc | 4 +- > api-ref/source/baremetal-api-v1-nodes.inc | 56 +- > api-ref/source/baremetal-api-v1-portgroups.inc | 9 +- > api-ref/source/baremetal-api-v1-ports.inc | 19 +- > api-ref/source/baremetal-api-v1-volume.inc | 8 +- > api-ref/source/conf.py | 14 +- > api-ref/source/parameters.yaml | 243 +- > .../source/samples/driver-property-response.json | 2 +- > bindep.txt | 2 +- > devstack/files/apache-ironic-api.template | 49 - > devstack/lib/ironic | 442 ++-- > devstack/tools/ironic/scripts/configure-vm.py | 1 + > devstack/tools/ironic/scripts/create-node.sh | 5 +- > devstack/upgrade/from-queens/upgrade-ironic | 5 + > devstack/upgrade/settings | 2 + > .../contributor/ironic-multitenant-networking.rst | 16 +- > .../install/include/configure-ironic-api.inc | 2 +- > .../refarch/small-cloud-trusted-tenants.rst | 248 ++ > ironic/api/app.py | 9 + > ironic/api/config.py | 2 +- > ironic/api/controllers/base.py | 4 +- > ironic/api/controllers/v1/bios.py | 127 + > ironic/api/controllers/v1/chassis.py | 21 +- > ironic/api/controllers/v1/driver.py | 63 +- > ironic/api/controllers/v1/node.py | 258 +- > ironic/api/controllers/v1/port.py | 97 +- > ironic/api/controllers/v1/portgroup.py | 59 +- > ironic/api/controllers/v1/ramdisk.py | 5 +- > ironic/api/controllers/v1/types.py | 3 +- > ironic/api/controllers/v1/utils.py | 266 +- > ironic/api/controllers/v1/versions.py | 14 +- > ironic/api/hooks.py | 18 +- > ironic/api/middleware/auth_token.py | 2 +- > ironic/cmd/__init__.py | 4 +- > ironic/cmd/api.py | 1 + > ironic/cmd/conductor.py | 2 +- > ironic/cmd/dbsync.py | 13 +- > ironic/common/boot_modes.py | 29 + > ironic/common/cinder.py | 8 +- > ironic/common/context.py | 2 +- > ironic/common/driver_factory.py | 321 +-- > ironic/common/exception.py | 57 +- > ironic/common/faults.py | 27 + > ironic/common/fsm.py | 8 +- > ironic/common/glance_service/base_image_service.py | 19 +- > ironic/common/glance_service/service_utils.py | 27 +- > ironic/common/glance_service/v2/image_service.py | 18 +- > ironic/common/hash_ring.py | 3 +- > ironic/common/image_service.py | 4 +- > ironic/common/images.py | 8 +- > ironic/common/network.py | 37 + > ironic/common/neutron.py | 46 +- > ironic/common/policy.py | 7 + > 
ironic/common/pxe_utils.py | 13 +- > ironic/common/release_mappings.py | 54 +- > ironic/common/service.py | 2 +- > ironic/common/states.py | 36 +- > ironic/common/utils.py | 55 +- > ironic/common/wsgi_service.py | 4 +- > ironic/conductor/base_manager.py | 207 +- > ironic/conductor/manager.py | 647 +++-- > ironic/conductor/rpcapi.py | 4 + > ironic/conductor/task_manager.py | 18 +- > ironic/conductor/utils.py | 176 +- > ironic/conf/__init__.py | 4 +- > ironic/conf/agent.py | 9 +- > ironic/conf/ansible.py | 12 +- > ironic/conf/conductor.py | 11 +- > ironic/conf/default.py | 45 +- > ironic/conf/deploy.py | 22 +- > ironic/conf/glance.py | 4 +- > ironic/conf/healthcheck.py | 29 + > ironic/conf/ilo.py | 5 +- > ironic/conf/inspector.py | 6 +- > ironic/conf/irmc.py | 18 +- > ironic/conf/keystone.py | 33 - > ironic/conf/neutron.py | 4 +- > ironic/conf/opts.py | 2 +- > ironic/conf/pxe.py | 3 +- > ironic/conf/xclarity.py | 19 +- > ironic/db/api.py | 153 +- > ...0b163d4481e_add_port_portgroup_internal_info.py | 6 +- > .../1a59178ebdf6_add_volume_targets_table.py | 6 +- > ...51876d68_add_storage_interface_db_field_and_.py | 6 +- > .../1e1d5ace7dc6_add_inspection_started_at_and_.py | 6 +- > .../21b331f883ef_add_provision_updated_at.py | 6 +- > ...cfae_add_conductor_hardware_interfaces_table.py | 6 +- > .../242cc6a923b3_add_node_maintenance_reason.py | 6 +- > .../versions/2581ebaf0cb2_initial_migration.py | 6 +- > .../2d13bc3d6bba_add_bios_config_and_interface.py | 31 + > .../2fb93ffd2af1_increase_node_name_length.py | 9 +- > .../31baaf680d2b_add_node_instance_info.py | 6 +- > .../versions/3ae36a5f5131_add_logical_name.py | 6 +- > ...25597_add_unique_constraint_to_instance_uuid.py | 4 +- > .../3cb628139ea4_nodes_add_console_enabled.py | 6 +- > .../3d86a077a3f2_add_port_physical_network.py | 6 +- > .../405cfe08f18d_add_rescue_interface_to_node.py | 6 +- > ...7deb87cc9d_add_conductor_affinity_and_online.py | 6 +- > .../alembic/versions/48d6c242bb9b_add_node_tags.py | 6 +- > ...d8f27f235_add_portgroup_configuration_fields.py | 8 +- > .../versions/4f399b21ae71_add_node_clean_step.py | 6 +- > .../516faf1bb9b1_resizing_column_nodes_driver.py | 6 +- > .../5674c57409b9_replace_nostate_with_available.py | 8 +- > ...10e_added_port_group_table_and_altered_ports.py | 6 +- > .../60cf717201bc_add_standalone_ports_supported.py | 6 +- > .../versions/789acc877671_add_raid_config.py | 6 +- > .../versions/82c315d60161_add_bios_settings.py | 42 + > ...868cb606a74a_add_version_field_in_base_class.py | 6 +- > .../b4130a7fc904_create_nodetraits_table.py | 6 +- > .../bb59b63f55a_add_node_driver_internal_info.py | 6 +- > .../bcdd431ba0bf_add_fields_for_all_interfaces.py | 6 +- > ...c14cef6dfedf_populate_node_network_interface.py | 14 +- > .../daa1ba02d98_add_volume_connectors_table.py | 6 +- > .../dbefd6bdaa2c_add_default_column_to_.py | 6 +- > .../dd34e1f1303b_add_resource_class_to_node.py | 6 +- > .../e294876e8028_add_node_network_interface.py | 6 +- > ...18ff30eb42_resize_column_nodes_instance_info.py | 32 + > .../versions/f6fdb920c182_set_pxe_enabled_true.py | 8 +- > .../fb3f10dd262e_add_fault_to_node_table.py | 31 + > ironic/db/sqlalchemy/api.py | 336 +-- > ironic/db/sqlalchemy/models.py | 32 +- > ironic/dhcp/neutron.py | 16 +- > ironic/drivers/agent.py | 114 - > ironic/drivers/base.py | 467 ++-- > ironic/drivers/drac.py | 60 - > ironic/drivers/fake.py | 364 --- > ironic/drivers/fake_hardware.py | 18 +- > ironic/drivers/generic.py | 4 +- > ironic/drivers/hardware_type.py | 4 + > ironic/drivers/ilo.py | 79 - > 
ironic/drivers/ipmi.py | 135 +- > ironic/drivers/irmc.py | 74 +- > ironic/drivers/modules/agent.py | 140 +- > ironic/drivers/modules/agent_base_vendor.py | 94 +- > ironic/drivers/modules/agent_client.py | 156 +- > ironic/drivers/modules/ansible/deploy.py | 35 +- > .../playbooks/callback_plugins/ironic_log.py | 10 +- > .../modules/ansible/playbooks/library/facts_wwn.py | 2 +- > .../ansible/playbooks/library/root_hints.py | 2 +- > .../ansible/playbooks/library/stream_url.py | 2 +- > ironic/drivers/modules/boot_mode_utils.py | 268 +++ > ironic/drivers/modules/console_utils.py | 8 +- > ironic/drivers/modules/deploy_utils.py | 149 +- > ironic/drivers/modules/drac/management.py | 2 +- > ironic/drivers/modules/drac/raid.py | 38 +- > ironic/drivers/modules/drac/vendor_passthru.py | 10 +- > ironic/drivers/modules/fake.py | 66 +- > ironic/drivers/modules/ilo/boot.py | 98 +- > ironic/drivers/modules/ilo/common.py | 3 +- > ironic/drivers/modules/ilo/firmware_processor.py | 2 +- > ironic/drivers/modules/ilo/management.py | 8 +- > ironic/drivers/modules/ilo/vendor.py | 4 +- > ironic/drivers/modules/image_cache.py | 8 +- > ironic/drivers/modules/inspector.py | 40 +- > ironic/drivers/modules/ipmitool.py | 38 +- > ironic/drivers/modules/ipxe_config.template | 15 +- > ironic/drivers/modules/irmc/boot.py | 154 +- > ironic/drivers/modules/irmc/common.py | 6 +- > ironic/drivers/modules/irmc/power.py | 10 +- > ironic/drivers/modules/iscsi_deploy.py | 8 +- > ironic/drivers/modules/network/common.py | 59 +- > ironic/drivers/modules/network/flat.py | 4 +- > ironic/drivers/modules/network/neutron.py | 15 +- > ironic/drivers/modules/noop.py | 13 + > ironic/drivers/modules/oneview/common.py | 5 +- > ironic/drivers/modules/oneview/deploy.py | 24 +- > ironic/drivers/modules/oneview/deploy_utils.py | 6 +- > ironic/drivers/modules/oneview/inspect.py | 35 +- > ironic/drivers/modules/oneview/management.py | 9 +- > ironic/drivers/modules/oneview/power.py | 13 +- > ironic/drivers/modules/pxe.py | 98 +- > ironic/drivers/modules/pxe_config.template | 2 +- > ironic/drivers/modules/redfish/management.py | 87 +- > ironic/drivers/modules/snmp.py | 363 ++- > ironic/drivers/modules/storage/cinder.py | 13 +- > ironic/drivers/modules/storage/external.py | 67 + > ironic/drivers/modules/xclarity/common.py | 110 +- > ironic/drivers/modules/xclarity/management.py | 39 +- > ironic/drivers/modules/xclarity/power.py | 37 +- > ironic/drivers/oneview.py | 104 +- > ironic/drivers/pxe.py | 231 -- > ironic/drivers/raid_config_schema.json | 3 +- > ironic/locale/ja/LC_MESSAGES/ironic.po | 1690 ------------- > ironic/objects/__init__.py | 1 + > ironic/objects/base.py | 45 +- > ironic/objects/bios.py | 256 ++ > ironic/objects/fields.py | 4 +- > ironic/objects/node.py | 67 +- > ironic/objects/notification.py | 4 +- > ironic/objects/port.py | 51 +- > ironic/objects/portgroup.py | 76 +- > ironic/objects/trait.py | 2 +- > .../unit/api/controllers/v1/test_portgroup.py | 258 +- > .../api/controllers/v1/test_volume_connector.py | 8 +- > .../unit/api/controllers/v1/test_volume_target.py | 8 +- > .../drivers/ipxe_config_boot_from_volume.template | 33 - > ...e_config_boot_from_volume_extra_volume.template | 37 + > ...nfig_boot_from_volume_no_extra_volumes.template | 34 + > ...pxe_config_boot_from_volume_no_volumes.template | 32 - > .../unit/drivers/ipxe_config_timeout.template | 2 +- > .../unit/drivers/modules/ansible/test_deploy.py | 55 +- > .../unit/drivers/modules/drac/test_inspect.py | 10 +- > .../unit/drivers/modules/drac/test_management.py | 15 +- > 
.../drivers/modules/drac/test_periodic_task.py | 62 +- > .../drivers/modules/ilo/test_firmware_processor.py | 10 +- > .../unit/drivers/modules/ilo/test_management.py | 14 +- > .../unit/drivers/modules/irmc/test_inspect.py | 27 +- > .../unit/drivers/modules/irmc/test_management.py | 55 +- > .../unit/drivers/modules/network/test_common.py | 119 +- > .../unit/drivers/modules/network/test_flat.py | 3 - > .../unit/drivers/modules/network/test_neutron.py | 62 +- > .../unit/drivers/modules/network/test_noop.py | 3 - > .../unit/drivers/modules/oneview/test_common.py | 26 +- > .../unit/drivers/modules/oneview/test_deploy.py | 63 +- > .../drivers/modules/oneview/test_deploy_utils.py | 18 +- > .../unit/drivers/modules/oneview/test_inspect.py | 53 +- > .../drivers/modules/oneview/test_management.py | 37 +- > .../unit/drivers/modules/oneview/test_power.py | 60 +- > .../drivers/modules/redfish/test_management.py | 66 +- > .../unit/drivers/modules/redfish/test_power.py | 3 - > .../unit/drivers/modules/storage/test_cinder.py | 71 +- > .../unit/drivers/modules/storage/test_external.py | 68 + > .../unit/drivers/modules/test_agent_base_vendor.py | 51 +- > .../unit/drivers/modules/test_console_utils.py | 3 +- > .../unit/drivers/modules/test_deploy_utils.py | 109 +- > .../unit/drivers/modules/test_iscsi_deploy.py | 31 +- > .../unit/drivers/modules/ucs/test_management.py | 13 +- > .../unit/drivers/modules/xclarity/test_common.py | 91 +- > .../drivers/modules/xclarity/test_management.py | 23 +- > .../unit/drivers/modules/xclarity/test_power.py | 32 +- > .../unit/drivers/third_party_driver_mock_specs.py | 2 + > lower-constraints.txt | 165 ++ > .../run.yaml | 172 +- > playbooks/legacy/grenade-dsvm-ironic/run.yaml | 89 +- > .../legacy/ironic-dsvm-base-multinode/pre.yaml | 22 + > playbooks/legacy/ironic-dsvm-base/pre.yaml | 22 + > playbooks/legacy/ironic-dsvm-functional/run.yaml | 21 - > playbooks/legacy/ironic-dsvm-standalone/run.yaml | 25 - > playbooks/legacy/tempest-dsvm-ironic-bfv/run.yaml | 91 +- > .../run.yaml | 106 + > .../run.yaml | 81 + > .../legacy/tempest-dsvm-ironic-inspector/run.yaml | 115 +- > .../run.yaml | 74 +- > .../run.yaml | 75 +- > .../run.yaml | 74 +- > .../run.yaml | 149 -- > .../run.yaml | 119 +- > .../run.yaml | 74 +- > .../run.yaml | 74 +- > .../legacy/tempest-dsvm-ironic-parallel/run.yaml | 23 +- > .../tempest-dsvm-ironic-pxe_ipa-full/run.yaml | 72 +- > .../run.yaml | 89 +- > ...dd-healthcheck-middleware-86120fa07a7c8151.yaml | 10 + > ...add-id-and-uuid-filtering-to-sqalchemy-api.yaml | 5 + > .../add-inspect-wait-state-948f83dfe342897b.yaml | 22 + > .../add-inspection-abort-a187e6e5c1f6311d.yaml | 9 + > ...retryable-ipmitool-errors-1c9351a89ff0ec1a.yaml | 9 + > .../notes/add-node-bios-9c1c3d442e8acdac.yaml | 6 + > ...dd-node-boot-mode-control-9761d4bcbd8c3a0d.yaml | 16 + > ...redfish-boot-mode-support-2f1a2568e71c65d0.yaml | 4 + > ...driver-type-baytech-mrp27-5007d1d7e0a52162.yaml | 5 + > ...pdu-driver-type-discovery-1f280b7f06fd1ca5.yaml | 7 + > ...-snmpv3-security-features-bbefb8b844813a53.yaml | 22 + > .../notes/add-tooz-dep-85c56c74733a222d.yaml | 2 +- > ...-rescue-to-boot-interface-bd74aff9e250334b.yaml | 6 + > ...add_detail_true_api_query-cb6944847830cd1a.yaml | 11 + > ...xternal-storage-interface-9b7c0a0a2afd3176.yaml | 13 + > .../adds-secure-erase-switch-23f449c86b3648a4.yaml | 7 + > .../notes/async-deprecate-b3d81d7968ea47e5.yaml | 9 + > .../notes/bug-1596421-0cb8f59073f56240.yaml | 9 + > .../notes/bug-1749433-363b747d2db67df6.yaml | 6 + > 
.../notes/bug-1749860-457292cf62e18a0e.yaml | 6 + > .../notes/bug-2001832-62e244dc48c1f79e.yaml | 7 + > .../notes/bug-2002062-959b865ced05b746.yaml | 7 + > .../notes/bug-2002093-9fcb3613d2daeced.yaml | 9 + > ...ck-in-cleaning-on-startup-443823ea4f937965.yaml | 5 + > ...precate-inspector-enabled-901fd9c9426046c7.yaml | 7 + > ...deprecate-oneview-drivers-5a487e1940bcbbc6.yaml | 12 + > ...deprecate-xclarity-config-af9b753f96779f42.yaml | 19 + > ...n-when-port-has-client-id-8e584586dc4fca50.yaml | 7 + > ...10-greater-than-16-drives-a4cb107e34371a51.yaml | 6 + > releasenotes/notes/fake-noop-bebc43983eb801d1.yaml | 6 + > .../fix-cleaning-with-traits-3a54faa70d594fd0.yaml | 7 + > ...ix-multi-attached-volumes-092ffedbdcf0feac.yaml | 6 + > ...tance-for-agent-interface-56753bdf04dd581f.yaml | 20 + > ...ploy_validation_resp_code-ed93627d1b0dfa94.yaml | 7 + > .../notes/heartbeat-locked-6e53b68337d5a258.yaml | 6 + > .../hw-ifaces-periodics-af8c9b93ecca9fcd.yaml | 6 + > .../inspector-periodics-34449c9d77830b3c.yaml | 8 + > ...-command-line-ip-argument-4e92cf8bb912f62d.yaml | 8 + > ...mc-support-ipmitool-power-a3480a70753948e5.yaml | 4 + > .../notes/ironic-11-prelude-6dae469633823f8d.yaml | 14 + > .../migrate_vif_port_id-5e1496638240933d.yaml | 13 + > .../notes/no-classic-drivers-e68d8527491314c3.yaml | 12 + > .../notes/no-classic-idrac-4fbf1ba66c35fb4a.yaml | 6 + > .../notes/no-classic-ilo-7822af6821d2f1cc.yaml | 5 + > .../notes/no-classic-ipmi-7ec52a7b01e40536.yaml | 5 + > .../notes/no-classic-irmc-3a606045e87119b7.yaml | 5 + > .../notes/no-classic-oneview-e46ee2838d2b1d37.yaml | 6 + > .../notes/no-classic-snmp-b77d267b535da216.yaml | 5 + > .../no-classic-ucs-cimc-7c62bb189ffbe0dd.yaml | 8 + > releasenotes/notes/no-fake-308b50d4ab83ca7a.yaml | 23 + > .../no-last-error-overwrite-b90aac3303eb992e.yaml | 4 + > ...no-sensors-in-maintenance-7a0ecf418336d105.yaml | 5 + > .../notes/node-fault-8c59c0ecb94ba562.yaml | 19 + > .../notes/orphan-nodes-389cb6d90c2917ec.yaml | 10 + > .../notes/partprobe-retries-e69e9d20f3a3c2d3.yaml | 14 + > .../port-list-bad-request-078512862c22118e.yaml | 6 + > .../power-fault-recovery-6e22f0114ceee203.yaml | 20 + > .../pxe-enabled-ports-check-c1736215dce76e97.yaml | 7 + > .../notes/raid-dell-boss-e9c5da9ddceedd67.yaml | 4 + > ...ble_deploy-driver-options-a28dc2f36110a67a.yaml | 8 + > ...ve-metric-pxe-boot-option-1aec41aebecc1ce9.yaml | 6 + > .../remove-vifs-on-teardown-707c8e40c46b6e64.yaml | 19 + > .../removed-keystone-section-1ec46442fb332c29.yaml | 12 + > ...ace-for-ilo-hardware-type-2392989d0fef8849.yaml | 7 + > ...ce-for-irmc-hardware-type-17e38197849748e0.yaml | 7 + > ...p-service-reloads-configs-0e2462e3f064a2ff.yaml | 17 + > ...onsole-during-unprovision-a29d8facb3f03be5.yaml | 7 + > ...3-if-no-conductors-online-ead1512628182ec4.yaml | 6 + > .../notes/story-2002637-4825d60b096e475b.yaml | 7 + > ...rio-for-ilo-hardware-type-ebca86da8fc271f6.yaml | 8 + > ...node-driver_internal_info-5c11de8f2c2b2e87.yaml | 8 + > .../validate-instance-traits-525dd3150aa6afa2.yaml | 9 + > ...detach-locking-fix-revert-3961d47fe419460a.yaml | 6 + > .../notes/xclarity-driver-622800d17459e3f9.yaml | 4 +- > .../xclarity-mask-password-9fe7605ece7689c3.yaml | 5 + > releasenotes/source/index.rst | 1 + > .../locale/en_GB/LC_MESSAGES/releasenotes.po | 419 ++++ > .../source/locale/ja/LC_MESSAGES/releasenotes.po | 43 + > releasenotes/source/queens.rst | 7 + > requirements.txt | 12 +- > setup.cfg | 52 +- > test-requirements.txt | 13 +- > tools/check-releasenotes.py | 2 + > 
tools/config/ironic-config-generator.conf | 1 + > tox.ini | 64 +- > zuul.d/legacy-ironic-jobs.yaml | 28 +- > zuul.d/project.yaml | 17 +- > 483 files changed, 17233 insertions(+), 13973 deletions(-) > > > Requirements updates > -------------------- > > diff --git a/requirements.txt b/requirements.txt > index 46533dd..f31ba2d 100644 > --- a/requirements.txt > +++ b/requirements.txt > @@ -8 +8 @@ automaton>=1.9.0 # Apache-2.0 > -eventlet!=0.18.3,!=0.20.1,<0.21.0,>=0.18.2 # MIT > +eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT > @@ -11 +11 @@ python-cinderclient>=3.3.0 # Apache-2.0 > -python-neutronclient>=6.3.0 # Apache-2.0 > +python-neutronclient>=6.7.0 # Apache-2.0 > @@ -13 +13 @@ python-glanceclient>=2.8.0 # Apache-2.0 > -keystoneauth1>=3.3.0 # Apache-2.0 > +keystoneauth1>=3.4.0 # Apache-2.0 > @@ -19,2 +19,2 @@ pysendfile>=2.0.0 # MIT > -oslo.concurrency>=3.25.0 # Apache-2.0 > -oslo.config>=5.1.0 # Apache-2.0 > +oslo.concurrency>=3.26.0 # Apache-2.0 > +oslo.config>=5.2.0 # Apache-2.0 > @@ -40 +40 @@ WSME>=0.8.0 # MIT > -Jinja2!=2.9.0,!=2.9.1,!=2.9.2,!=2.9.3,!=2.9.4,>=2.8 # BSD License (3 clause) > +Jinja2>=2.10 # BSD License (3 clause) > diff --git a/test-requirements.txt b/test-requirements.txt > index 88922ef..80ff780 100644 > --- a/test-requirements.txt > +++ b/test-requirements.txt > @@ -4 +4 @@ > -hacking>=1.0.0 # Apache-2.0 > +hacking>=1.0.0,<1.1.0 # Apache-2.0 > @@ -12,0 +13 @@ oslotest>=3.2.0 # Apache-2.0 > +stestr>=1.0.0 # Apache-2.0 > @@ -15 +15,0 @@ testtools>=2.2.0 # MIT > -os-testr>=1.0.0 # Apache-2.0 > @@ -21,8 +21 @@ flake8-import-order>=0.13 # LGPLv3 > - > -# Doc requirements > -sphinx!=1.6.6,>=1.6.2 # BSD > -sphinxcontrib-pecanwsme>=0.8.0 # Apache-2.0 > -sphinxcontrib-seqdiag>=0.8.4 # BSD > -openstackdocstheme>=1.18.1 # Apache-2.0 > -reno>=2.5.0 # Apache-2.0 > -os-api-ref>=1.4.0 # Apache-2.0 > +Pygments>=2.2.0 # BSD > From doug at doughellmann.com Thu Jul 5 13:25:54 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 05 Jul 2018 09:25:54 -0400 Subject: [openstack-dev] [tc] Technical Committee Update for 3 July In-Reply-To: References: <1530641744-sup-28@lrrr.local> Message-ID: <1530796986-sup-1041@lrrr.local> Excerpts from Hongbin Lu's message of 2018-07-03 22:06:41 -0400: > > > > Discussions about affiliation diversity continue in two directions. > > Zane's proposal for requirements for new project teams has stalled a > > bit. The work Thierry and Mohammed have done on the diversity tags has > > brought a new statistics script and a proposal to drop the use of the > > tags in favor of folding the diversity information into the more general > > health checks we are doing. Thierry has updated the health tracker page > > > > Hi, > > If appropriate, I would rather to nominate myself as the liaison for the > Zun project. I am the first PTL of the project and familiar with the > current status. I should be more appropriate for doing the health > evaluation for this project. Please let me know if it is possible for me to > participant. > > Best regards, > Hongbin The point of the health check process is to have the TC actively reach out to each team to see how things are going and identify potential issues before they turn into full blown problems. So, while I'm sure Zane and Thierry would welcome your input, we want them to draw their own conclusions about the state of the project. 
Doug From tpb at dyncloud.net Thu Jul 5 14:17:34 2018 From: tpb at dyncloud.net (Tom Barron) Date: Thu, 5 Jul 2018 10:17:34 -0400 Subject: [openstack-dev] [manila] No meeting today July 5 Message-ID: <20180705141734.vtx2gxvrtc6h6ewf@barron.net> We have a fair number of team members taking a holiday today and no new agenda items were added this week so let's skip today's community meeting. Next manila community meeting will be July 12 at 1500 UTC. https://wiki.openstack.org/wiki/Manila/Meetings Let's keep up with the reviews on outstanding work that needs to complete by Milestone 3: https://etherpad.openstack.org/p/manila-rocky-review-focus Thanks! -- Tom Barron (tbarron) From nishant.e.kumar at ericsson.com Thu Jul 5 16:40:00 2018 From: nishant.e.kumar at ericsson.com (Nishant Kumar E) Date: Thu, 5 Jul 2018 16:40:00 +0000 Subject: [openstack-dev] [cinder][security][api-wg] Adding http security headers Message-ID: Hi, I have registered a blueprint for adding http security headers - https://blueprints.launchpad.net/cinder/+spec/http-security-headers Reason for introducing this change - I work for AT&T cloud project - Network Cloud (Earlier known as AT&T integrated Cloud). As part of working there we have introduced this change within all the services as kind of a downstream change but would like to see it a part of upstream community. While we did not face any major threats without this change but during our investigation process we found that if dealing with web services we should maximize the security as much as possible and came up with a list of HTTP security headers that we should include as part of the OpenStack services. I would like to introduce this change as part of cinder to start off and then propagate this to all the services. Some reference links which might give more insight into this: * https://www.owasp.org/index.php/OWASP_Secure_Headers_Project#tab=Headers * https://www.keycdn.com/blog/http-security-headers/ * https://securityintelligence.com/an-introduction-to-http-response-headers-for-security/ Please let me know if this looks good and whether it can be included as part of Cinder followed by other services. More details on how the implementation will be done is mentioned as part of the blueprint but any better ideas for implementation is welcomed too !! Thanks and Regards, Nishant Regards, Nishant -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Thu Jul 5 16:46:54 2018 From: openstack at nemebean.com (Ben Nemec) Date: Thu, 5 Jul 2018 11:46:54 -0500 Subject: [openstack-dev] [oslo] Reminder about Oslo feature freeze Message-ID: Hi, This is just a reminder that Oslo observes feature freeze earlier than other projects so those projects have time to implement any new features from Oslo. Per the policy[1] we freeze one week before the non-client library feature freeze, which is coming in two weeks. Therefore, we have about one week to land new features in Oslo. Anything that misses the deadline will most likely need to wait until Stein. Feel free to contact the Oslo team with any comments or questions. Thanks. 
-Ben 1: http://specs.openstack.org/openstack/oslo-specs/specs/policy/feature-freeze.html From jim at jimrollenhagen.com Thu Jul 5 16:53:34 2018 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Thu, 5 Jul 2018 12:53:34 -0400 Subject: [openstack-dev] [cinder][security][api-wg] Adding http security headers In-Reply-To: References: Message-ID: On Thu, Jul 5, 2018 at 12:40 PM, Nishant Kumar E < nishant.e.kumar at ericsson.com> wrote: > Hi, > > > > I have registered a blueprint for adding http security headers - > https://blueprints.launchpad.net/cinder/+spec/http-security-headers > > > > Reason for introducing this change - I work for AT&T cloud project – > Network Cloud (Earlier known as AT&T integrated Cloud). As part of working > there we have introduced this change within all the services as kind of a > downstream change but would like to see it a part of upstream community. > While we did not face any major threats without this change but during our > investigation process we found that if dealing with web services we should > maximize the security as much as possible and came up with a list of HTTP > security headers that we should include as part of the OpenStack services. > I would like to introduce this change as part of cinder to start off and > then propagate this to all the services. > > > > Some reference links which might give more insight into this: > > - https://www.owasp.org/index.php/OWASP_Secure_Headers_ > Project#tab=Headers > - https://www.keycdn.com/blog/http-security-headers/ > - https://securityintelligence.com/an-introduction-to-http- > response-headers-for-security/ > > Please let me know if this looks good and whether it can be included as > part of Cinder followed by other services. More details on how the > implementation will be done is mentioned as part of the blueprint but any > better ideas for implementation is welcomed too !! > Wouldn't this be a job for the HTTP server in front of cinder (or whatever service)? Especially "Strict-Transport-Security" as one shouldn't be enabling that without ensuring a correct TLS config. Bonus points in that upstream wouldn't need any changes, and we won't need to change every project. :) // jim -------------- next part -------------- An HTML attachment was scrubbed... URL: From cdent+os at anticdent.org Thu Jul 5 17:05:39 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Thu, 5 Jul 2018 18:05:39 +0100 (BST) Subject: [openstack-dev] [all][api] POST /api-sig/news Message-ID: Greetings OpenStack community, At today's meeting we discussed an issue that came up on a nova/placement review [9] wherein there was some indecision about whether a response code of 400 or 404 is more appropriate when a path segement expects a UUID, the request doesn't supply something that is actually a UUID, and the method being used on the URI may be creating a resource. We agreed with the earlier discussion that a 400 was approrpiate in this narrow case. Other cases may be different. With that warm up exercise out of the way, we moved on to discussing pending guidelines, freezing one of them [10] and declaring that another [11] required a followup to clarify the format of strings codes used in error responses. After that, we did some group learning about StoryBoard [8]. This is becoming something of a regular activity. As always if you're interested in helping out, in addition to coming to the meetings, there's also: * The list of bugs [5] indicates several missing or incomplete guidelines. 
* The existing guidelines [2] always need refreshing to account for changes over time. If you find something that's not quite right, submit a patch [6] to fix it. * Have you done something for which you think guidance would have made things easier but couldn't find any? Submit a patch and help others [6]. # Newly Published Guidelines None # API Guidelines Proposed for Freeze Guidelines that are ready for wider review by the whole community. * Expand error code document to expect clarity https://review.openstack.org/#/c/577118/ # Guidelines Currently Under Review [3] * Add links to errors-example.json https://review.openstack.org/#/c/578369/ * Update parameter names in microversion sdk spec https://review.openstack.org/#/c/557773/ * Add API-schema guide (still being defined) https://review.openstack.org/#/c/524467/ * A (shrinking) suite of several documents about doing version and service discovery Start at https://review.openstack.org/#/c/459405/ * WIP: microversion architecture archival doc (very early; not yet ready for review) https://review.openstack.org/444892 # Highlighting your API impacting issues If you seek further review and insight from the API SIG about APIs that you are developing or changing, please address your concerns in an email to the OpenStack developer mailing list[1] with the tag "[api]" in the subject. In your email, you should include any relevant reviews, links, and comments to help guide the discussion of the specific challenge you are facing. To learn more about the API SIG mission and the work we do, see our wiki page [4] and guidelines [2]. Thanks for reading and see you next week! # References [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [2] http://specs.openstack.org/openstack/api-wg/ [3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z [4] https://wiki.openstack.org/wiki/API_SIG [5] https://bugs.launchpad.net/openstack-api-wg [6] https://git.openstack.org/cgit/openstack/api-wg [7] http://lists.openstack.org/pipermail/openstack-dev/2018-June/131881.html [8] https://storyboard.openstack.org/#!/project/1039 [9] https://review.openstack.org/#/c/580373/ [10] https://review.openstack.org/#/c/577118/ [11] https://review.openstack.org/#/c/578369/ Meeting Agenda https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda Past Meeting Records http://eavesdrop.openstack.org/meetings/api_sig/ Open Bugs https://bugs.launchpad.net/openstack-api-wg -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From doug at doughellmann.com Thu Jul 5 17:17:58 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 05 Jul 2018 13:17:58 -0400 Subject: [openstack-dev] [cinder][security][api-wg] Adding http security headers In-Reply-To: References: Message-ID: <1530811036-sup-6615@lrrr.local> Excerpts from Jim Rollenhagen's message of 2018-07-05 12:53:34 -0400: > On Thu, Jul 5, 2018 at 12:40 PM, Nishant Kumar E < > nishant.e.kumar at ericsson.com> wrote: > > > Hi, > > > > > > > > I have registered a blueprint for adding http security headers - > > https://blueprints.launchpad.net/cinder/+spec/http-security-headers > > > > > > > > Reason for introducing this change - I work for AT&T cloud project – > > Network Cloud (Earlier known as AT&T integrated Cloud). As part of working > > there we have introduced this change within all the services as kind of a > > downstream change but would like to see it a part of upstream community. 
> > While we did not face any major threats without this change but during our > > investigation process we found that if dealing with web services we should > > maximize the security as much as possible and came up with a list of HTTP > > security headers that we should include as part of the OpenStack services. > > I would like to introduce this change as part of cinder to start off and > > then propagate this to all the services. > > > > > > > > Some reference links which might give more insight into this: > > > > - https://www.owasp.org/index.php/OWASP_Secure_Headers_ > > Project#tab=Headers > > - https://www.keycdn.com/blog/http-security-headers/ > > - https://securityintelligence.com/an-introduction-to-http- > > response-headers-for-security/ > > > > Please let me know if this looks good and whether it can be included as > > part of Cinder followed by other services. More details on how the > > implementation will be done is mentioned as part of the blueprint but any > > better ideas for implementation is welcomed too !! > > > > Wouldn't this be a job for the HTTP server in front of cinder (or whatever > service)? Especially "Strict-Transport-Security" as one shouldn't be > enabling that without ensuring a correct TLS config. > > Bonus points in that upstream wouldn't need any changes, and we won't need > to change every project. :) > > // jim Yes, this feels very much like something the deployment tools should do when they set up Apache or uWSGI or whatever service is in front of each API WSGI service. Doug From Kevin.Fox at pnnl.gov Thu Jul 5 17:30:23 2018 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Thu, 5 Jul 2018 17:30:23 +0000 Subject: [openstack-dev] [tc] [all] TC Report 18-26 In-Reply-To: <8c967664-76a5-8dde-6c3b-80801641eb9c@redhat.com> References: <70afeb87-37c9-1595-ffa4-aadbd1a90228@gmail.com> <1A3C52DFCD06494D8528644858247BF01C141B52@EX10MBOX03.pnnl.gov> <80d67ca3-81ef-875a-7ccb-2afad8913fb1@redhat.com> <67f015ab-b181-3ff0-4e4b-c30c503e1268@gmail.com> <1A3C52DFCD06494D8528644858247BF01C1432C5@EX10MBOX03.pnnl.gov>, <8c967664-76a5-8dde-6c3b-80801641eb9c@redhat.com> Message-ID: <1A3C52DFCD06494D8528644858247BF01C144D40@EX10MBOX03.pnnl.gov> We're pretty far into a tangent... /me shrugs. I've done it. It can work. Some things your right. deploying k8s is more work then deploying ansible. But what I said depends on context. If your goal is to deploy k8s/manage k8s then having to learn how to use k8s is not a big ask. adding a different tool such as ansible is an extra cognitive dependency. Deploying k8s doesn't need a general solution to deploying generic base OS's. Just enough OS to deploy K8s and then deploy everything on top in containers. Deploying a seed k8s with minikube is pretty trivial. I'm not suggesting a solution here to provide generic provisioning to every use case in the datacenter. But enough to get a k8s based cluster up and self hosted enough where you could launch other provisioning/management tools in that same cluster, if you need that. It provides a solid base for the datacenter on which you can easily add the services you need for dealing with everything. All of the microservices I mentioned can be wrapped up in a single helm chart and deployed with a single helm install command. I don't have permission to release anything at the moment, so I can't prove anything right now. So, take my advice with a grain of salt. :) Switching gears, you said why would users use lfs when they can use a distro, so why use openstack without a distro. 
I'd say, today unless you are paying a lot, there isn't really an equivalent distro that isn't almost as much effort as lfs when you consider day2 ops. To compare with Redhat again, we have a RHEL (redhat openstack), and Rawhide (devstack) but no equivalent of CentOS. Though I think TripleO has been making progress on this front... Anyway. This thread is I think 2 tangents away from the original topic now. If folks are interested in continuing this discussion, lets open a new thread. Thanks, Kevin ________________________________________ From: Dmitry Tantsur [dtantsur at redhat.com] Sent: Wednesday, July 04, 2018 4:24 AM To: openstack-dev at lists.openstack.org Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26 Tried hard to avoid this thread, but this message is so much wrong.. On 07/03/2018 09:48 PM, Fox, Kevin M wrote: > I don't dispute trivial, but a self hosting k8s on bare metal is not incredibly hard. In fact, it is easier then you might think. k8s is a platform for deploying/managing services. Guess what you need to provision bare metal? Just a few microservices. A dhcp service. dhcpd in a daemonset works well. some pxe infrastructure. pixiecore with a simple http backend works pretty well in practice. a service to provide installation instructions. nginx server handing out kickstart files for example. and a place to fetch rpms from in case you don't have internet access or want to ensure uniformity. nginx server with a mirror yum repo. Its even possible to seed it on minikube and sluff it off to its own cluster. > > The main hard part about it is currently no one is shipping a reference implementation of the above. That may change... > > It is certainly much much easier then deploying enough OpenStack to get a self hosting ironic working. Side note: no, it's not. What you describe is similarly hard to installing standalone ironic from scratch and much harder than using bifrost for everything. Especially when you try to do it in production. Especially with unusual operating requirements ("no TFTP servers on my network"). Also, sorry, I cannot resist: "Guess what you need to orchestrate containers? Just a few things. A container runtime. Docker works well. some remove execution tooling. ansible works pretty well in practice. It is certainly much much easier then deploying enough k8s to get a self hosting containers orchestration working." Such oversimplications won't bring us anywhere. Sometimes things are hard because they ARE hard. Where are people complaining that installing a full GNU/Linux distributions from upstream tarballs is hard? How many operators here use LFS as their distro? If we are okay with using a distro for GNU/Linux, why using a distro for OpenStack causes so much contention? > > Thanks, > Kevin > > ________________________________________ > From: Jay Pipes [jaypipes at gmail.com] > Sent: Tuesday, July 03, 2018 10:06 AM > To: openstack-dev at lists.openstack.org > Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26 > > On 07/02/2018 03:31 PM, Zane Bitter wrote: >> On 28/06/18 15:09, Fox, Kevin M wrote: >>> * made the barrier to testing/development as low as 'curl >>> http://......minikube; minikube start' (this spurs adoption and >>> contribution) >> >> That's not so different from devstack though. >> >>> * not having large silo's in deployment projects allowed better >>> communication on common tooling. >>> * Operator focused architecture, not project based architecture. >>> This simplifies the deployment situation greatly. 
>>> * try whenever possible to focus on just the commons and push vendor >>> specific needs to plugins so vendors can deal with vendor issues >>> directly and not corrupt the core. >> >> I agree with all of those, but to be fair to OpenStack, you're leaving >> out arguably the most important one: >> >> * Installation instructions start with "assume a working datacenter" >> >> They have that luxury; we do not. (To be clear, they are 100% right to >> take full advantage of that luxury. Although if there are still folks >> who go around saying that it's a trivial problem and OpenStackers must >> all be idiots for making it look so difficult, they should really stop >> embarrassing themselves.) > > This. > > There is nothing trivial about the creation of a working datacenter -- > never mind a *well-running* datacenter. Comparing Kubernetes to > OpenStack -- particular OpenStack's lower levels -- is missing this > fundamental point and ends up comparing apples to oranges. > > Best, > -jay > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From fungi at yuggoth.org Thu Jul 5 17:47:25 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 5 Jul 2018 17:47:25 +0000 Subject: [openstack-dev] [tc] [all] TC Report 18-26 In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C144D40@EX10MBOX03.pnnl.gov> References: <70afeb87-37c9-1595-ffa4-aadbd1a90228@gmail.com> <1A3C52DFCD06494D8528644858247BF01C141B52@EX10MBOX03.pnnl.gov> <80d67ca3-81ef-875a-7ccb-2afad8913fb1@redhat.com> <67f015ab-b181-3ff0-4e4b-c30c503e1268@gmail.com> <1A3C52DFCD06494D8528644858247BF01C1432C5@EX10MBOX03.pnnl.gov> <8c967664-76a5-8dde-6c3b-80801641eb9c@redhat.com> <1A3C52DFCD06494D8528644858247BF01C144D40@EX10MBOX03.pnnl.gov> Message-ID: <20180705174725.hq4czbw2ldzdh2dn@yuggoth.org> On 2018-07-05 17:30:23 +0000 (+0000), Fox, Kevin M wrote: [...] > Deploying k8s doesn't need a general solution to deploying generic > base OS's. Just enough OS to deploy K8s and then deploy everything > on top in containers. Deploying a seed k8s with minikube is pretty > trivial. I'm not suggesting a solution here to provide generic > provisioning to every use case in the datacenter. But enough to > get a k8s based cluster up and self hosted enough where you could > launch other provisioning/management tools in that same cluster, > if you need that. It provides a solid base for the datacenter on > which you can easily add the services you need for dealing with > everything. > > All of the microservices I mentioned can be wrapped up in a single > helm chart and deployed with a single helm install command. > > I don't have permission to release anything at the moment, so I > can't prove anything right now. So, take my advice with a grain of > salt. :) [...] 
Anything like http://www.airshipit.org/ ? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From dprince at redhat.com Thu Jul 5 17:50:08 2018 From: dprince at redhat.com (Dan Prince) Date: Thu, 05 Jul 2018 13:50:08 -0400 Subject: [openstack-dev] [TripleO] easily identifying how services are configured Message-ID: <8e58a5def5b66d229e9c304cbbc9f25c02d49967.camel@redhat.com> Last week I was tinkering with my docker configuration a bit and was a bit surprised that puppet/services/docker.yaml no longer used puppet to configure the docker daemon. It now uses Ansible [1] which is very cool but brings up the question of how should we clearly indicate to developers and users that we are using Ansible vs Puppet for configuration? TripleO has been around for a while now, has supported multiple configuration ans service types over the years: os-apply-config, puppet, containers, and now Ansible. In the past we've used rigid directory structures to identify which "service type" was used. More recently we mixed things up a bit more even by extending one service type from another ("docker" services all initially extended the "puppet" services to generate config files and provide an easy upgrade path). Similarly we now use Ansible all over the place for other things in many of or docker and puppet services for things like upgrades. That is all good too. I guess the thing I'm getting at here is just a way to cleanly identify which services are configured via Puppet vs. Ansible. And how can we do that in the least destructive way possible so as not to confuse ourselves and our users in the process. Also, I think its work keeping in mind that TripleO was once a multi- vendor project with vendors that had different preferences on service configuration. Also having the ability to support multiple configuration mechanisms in the future could once again present itself (thinking of Kubernetes as an example). Keeping in mind there may be a conversion period that could well last more than a release or two. I suggested a 'services/ansible' directory with mixed responces in our #tripleo meeting this week. Any other thoughts on the matter? Thanks, Dan [1] http://git.openstack.org/cgit/openstack/tripleo-heat-templates/comm it/puppet/services/docker.yaml?id=00f5019ef28771e0b3544d0aa3110d5603d7c 159 From james.slagle at gmail.com Thu Jul 5 18:13:17 2018 From: james.slagle at gmail.com (James Slagle) Date: Thu, 5 Jul 2018 14:13:17 -0400 Subject: [openstack-dev] [TripleO] easily identifying how services are configured In-Reply-To: <8e58a5def5b66d229e9c304cbbc9f25c02d49967.camel@redhat.com> References: <8e58a5def5b66d229e9c304cbbc9f25c02d49967.camel@redhat.com> Message-ID: On Thu, Jul 5, 2018 at 1:50 PM, Dan Prince wrote: > Last week I was tinkering with my docker configuration a bit and was a > bit surprised that puppet/services/docker.yaml no longer used puppet to > configure the docker daemon. It now uses Ansible [1] which is very cool > but brings up the question of how should we clearly indicate to > developers and users that we are using Ansible vs Puppet for > configuration? > > TripleO has been around for a while now, has supported multiple > configuration ans service types over the years: os-apply-config, > puppet, containers, and now Ansible. In the past we've used rigid > directory structures to identify which "service type" was used. 
More > recently we mixed things up a bit more even by extending one service > type from another ("docker" services all initially extended the > "puppet" services to generate config files and provide an easy upgrade > path). > > Similarly we now use Ansible all over the place for other things in > many of or docker and puppet services for things like upgrades. That is > all good too. I guess the thing I'm getting at here is just a way to > cleanly identify which services are configured via Puppet vs. Ansible. > And how can we do that in the least destructive way possible so as not > to confuse ourselves and our users in the process. > > Also, I think its work keeping in mind that TripleO was once a multi- > vendor project with vendors that had different preferences on service > configuration. Also having the ability to support multiple > configuration mechanisms in the future could once again present itself > (thinking of Kubernetes as an example). Keeping in mind there may be a > conversion period that could well last more than a release or two. > > I suggested a 'services/ansible' directory with mixed responces in our > #tripleo meeting this week. Any other thoughts on the matter? I would almost rather see us organize the directories by service name/project instead of implementation. Instead of: puppet/services/nova-api.yaml puppet/services/nova-conductor.yaml docker/services/nova-api.yaml docker/services/nova-conductor.yaml We'd have: services/nova/nova-api-puppet.yaml services/nova/nova-conductor-puppet.yaml services/nova/nova-api-docker.yaml services/nova/nova-conductor-docker.yaml (or perhaps even another level of directories to indicate puppet/docker/ansible?) Personally, such an organization is something I'm more used to. It feels more similar to how most would expect a puppet module or ansible role to be organized, where you have the abstraction (service configuration) at a higher directory level than specific implementations. It would also lend itself more easily to adding implementations only for specific services, and address the question of if a new top level implementation directory needs to be created. For example, adding a services/nova/nova-api-chef.yaml seems a lot less contentious than adding a top level chef/services/nova-api.yaml. It'd also be nice if we had a way to mark the default within a given service's directory. Perhaps services/nova/nova-api-default.yaml, which would be a new template that just consumes the default? Or perhaps a symlink, although it was pointed out symlinks don't work in swift containers. Still, that could possibly be addressed in our plan upload workflows. Then the resource-registry would point at nova-api-default.yaml. One could easily tell which is the default without having to cross reference with the resource-registry. 
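For illustration, a rough sketch of what that default alias could look like (a sketch only: the parameter plumbing is elided and the exact interface here is made up, so don't read it as a working tripleo-heat-templates file):

    # services/nova/nova-api-default.yaml (sketch)
    heat_template_version: queens

    description: >
      Alias template that just consumes whichever implementation is the
      current default for nova-api (docker in this sketch).

    parameters:
      EndpointMap:
        type: json
        default: {}
      ServiceNetMap:
        type: json
        default: {}
      # ...the rest of the common service interface would be passed
      # through unchanged...

    resources:
      NovaApiImpl:
        type: ./nova-api-docker.yaml
        properties:
          EndpointMap: {get_param: EndpointMap}
          ServiceNetMap: {get_param: ServiceNetMap}

    outputs:
      role_data:
        description: Role data from the delegated implementation.
        value: {get_attr: [NovaApiImpl, role_data]}

    # and a hypothetical environment would keep pointing at the alias:
    # resource_registry:
    #   OS::TripleO::Services::NovaApi: ../services/nova/nova-api-default.yaml

That way the registry entry stays stable while the per-implementation templates keep their explicit -puppet/-docker/-ansible suffixes.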
-- -- James Slagle -- From dtantsur at redhat.com Thu Jul 5 18:17:49 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Thu, 5 Jul 2018 20:17:49 +0200 Subject: [openstack-dev] [tc] [all] TC Report 18-26 In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C144D40@EX10MBOX03.pnnl.gov> References: <70afeb87-37c9-1595-ffa4-aadbd1a90228@gmail.com> <1A3C52DFCD06494D8528644858247BF01C141B52@EX10MBOX03.pnnl.gov> <80d67ca3-81ef-875a-7ccb-2afad8913fb1@redhat.com> <67f015ab-b181-3ff0-4e4b-c30c503e1268@gmail.com> <1A3C52DFCD06494D8528644858247BF01C1432C5@EX10MBOX03.pnnl.gov> <8c967664-76a5-8dde-6c3b-80801641eb9c@redhat.com> <1A3C52DFCD06494D8528644858247BF01C144D40@EX10MBOX03.pnnl.gov> Message-ID: On Thu, Jul 5, 2018, 19:31 Fox, Kevin M wrote: > We're pretty far into a tangent... > > /me shrugs. I've done it. It can work. > > Some things your right. deploying k8s is more work then deploying ansible. > But what I said depends on context. If your goal is to deploy k8s/manage > k8s then having to learn how to use k8s is not a big ask. adding a > different tool such as ansible is an extra cognitive dependency. Deploying > k8s doesn't need a general solution to deploying generic base OS's. Just > enough OS to deploy K8s and then deploy everything on top in containers. > Deploying a seed k8s with minikube is pretty trivial. I'm not suggesting a > solution here to provide generic provisioning to every use case in the > datacenter. But enough to get a k8s based cluster up and self hosted enough > where you could launch other provisioning/management tools in that same > cluster, if you need that. It provides a solid base for the datacenter on > which you can easily add the services you need for dealing with everything. > > All of the microservices I mentioned can be wrapped up in a single helm > chart and deployed with a single helm install command. > > I don't have permission to release anything at the moment, so I can't > prove anything right now. So, take my advice with a grain of salt. :) > > Switching gears, you said why would users use lfs when they can use a > distro, so why use openstack without a distro. I'd say, today unless you > are paying a lot, there isn't really an equivalent distro that isn't almost > as much effort as lfs when you consider day2 ops. To compare with Redhat > again, we have a RHEL (redhat openstack), and Rawhide (devstack) but no > equivalent of CentOS. Though I think TripleO has been making progress on > this front... > It's RDO what you're looking for (equivalent of centos). TripleO is an installer project, not a distribution. > Anyway. This thread is I think 2 tangents away from the original topic > now. If folks are interested in continuing this discussion, lets open a new > thread. > > Thanks, > Kevin > > ________________________________________ > From: Dmitry Tantsur [dtantsur at redhat.com] > Sent: Wednesday, July 04, 2018 4:24 AM > To: openstack-dev at lists.openstack.org > Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26 > > Tried hard to avoid this thread, but this message is so much wrong.. > > On 07/03/2018 09:48 PM, Fox, Kevin M wrote: > > I don't dispute trivial, but a self hosting k8s on bare metal is not > incredibly hard. In fact, it is easier then you might think. k8s is a > platform for deploying/managing services. Guess what you need to provision > bare metal? Just a few microservices. A dhcp service. dhcpd in a daemonset > works well. some pxe infrastructure. pixiecore with a simple http backend > works pretty well in practice. 
a service to provide installation > instructions. nginx server handing out kickstart files for example. and a > place to fetch rpms from in case you don't have internet access or want to > ensure uniformity. nginx server with a mirror yum repo. Its even possible > to seed it on minikube and sluff it off to its own cluster. > > > > The main hard part about it is currently no one is shipping a reference > implementation of the above. That may change... > > > > It is certainly much much easier then deploying enough OpenStack to get > a self hosting ironic working. > > Side note: no, it's not. What you describe is similarly hard to installing > standalone ironic from scratch and much harder than using bifrost for > everything. Especially when you try to do it in production. Especially with > unusual operating requirements ("no TFTP servers on my network"). > > Also, sorry, I cannot resist: > "Guess what you need to orchestrate containers? Just a few things. A > container > runtime. Docker works well. some remove execution tooling. ansible works > pretty > well in practice. It is certainly much much easier then deploying enough > k8s to > get a self hosting containers orchestration working." > > Such oversimplications won't bring us anywhere. Sometimes things are hard > because they ARE hard. Where are people complaining that installing a full > GNU/Linux distributions from upstream tarballs is hard? How many operators > here > use LFS as their distro? If we are okay with using a distro for GNU/Linux, > why > using a distro for OpenStack causes so much contention? > > > > > Thanks, > > Kevin > > > > ________________________________________ > > From: Jay Pipes [jaypipes at gmail.com] > > Sent: Tuesday, July 03, 2018 10:06 AM > > To: openstack-dev at lists.openstack.org > > Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26 > > > > On 07/02/2018 03:31 PM, Zane Bitter wrote: > >> On 28/06/18 15:09, Fox, Kevin M wrote: > >>> * made the barrier to testing/development as low as 'curl > >>> http://......minikube; minikube start' (this spurs adoption and > >>> contribution) > >> > >> That's not so different from devstack though. > >> > >>> * not having large silo's in deployment projects allowed better > >>> communication on common tooling. > >>> * Operator focused architecture, not project based architecture. > >>> This simplifies the deployment situation greatly. > >>> * try whenever possible to focus on just the commons and push vendor > >>> specific needs to plugins so vendors can deal with vendor issues > >>> directly and not corrupt the core. > >> > >> I agree with all of those, but to be fair to OpenStack, you're leaving > >> out arguably the most important one: > >> > >> * Installation instructions start with "assume a working > datacenter" > >> > >> They have that luxury; we do not. (To be clear, they are 100% right to > >> take full advantage of that luxury. Although if there are still folks > >> who go around saying that it's a trivial problem and OpenStackers must > >> all be idiots for making it look so difficult, they should really stop > >> embarrassing themselves.) > > > > This. > > > > There is nothing trivial about the creation of a working datacenter -- > > never mind a *well-running* datacenter. Comparing Kubernetes to > > OpenStack -- particular OpenStack's lower levels -- is missing this > > fundamental point and ends up comparing apples to oranges. 
> > > > Best, > > -jay > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dprince at redhat.com Thu Jul 5 18:23:32 2018 From: dprince at redhat.com (Dan Prince) Date: Thu, 05 Jul 2018 14:23:32 -0400 Subject: [openstack-dev] [TripleO] easily identifying how services are configured In-Reply-To: References: <8e58a5def5b66d229e9c304cbbc9f25c02d49967.camel@redhat.com> Message-ID: <927f5ff4ec528bdcc5877c7a1a5635c62f5f1cb5.camel@redhat.com> On Thu, 2018-07-05 at 14:13 -0400, James Slagle wrote: > > I would almost rather see us organize the directories by service > name/project instead of implementation. > > Instead of: > > puppet/services/nova-api.yaml > puppet/services/nova-conductor.yaml > docker/services/nova-api.yaml > docker/services/nova-conductor.yaml > > We'd have: > > services/nova/nova-api-puppet.yaml > services/nova/nova-conductor-puppet.yaml > services/nova/nova-api-docker.yaml > services/nova/nova-conductor-docker.yaml > > (or perhaps even another level of directories to indicate > puppet/docker/ansible?) I'd be open to this but doing changes on this scale is a much larger developer and user impact than what I was thinking we would be willing to entertain for the issue that caused me to bring this up (i.e. how to identify services which get configured by Ansible). Its also worth noting that many projects keep these sorts of things in different repos too. Like Kolla fully separates kolla-ansible and kolla-kubernetes as they are quite divergent. We have been able to preserve some of our common service architectures but as things move towards kubernetes we may which to change things structurally a bit too. Dan From melwittt at gmail.com Thu Jul 5 18:55:47 2018 From: melwittt at gmail.com (melanie witt) Date: Thu, 5 Jul 2018 11:55:47 -0700 Subject: [openstack-dev] [Openstack] [nova][api] Novaclient redirect endpoint https into http In-Reply-To: <0D8F95CB-0AAB-45FD-ADC8-3B917C1460D4@workday.com> References: <00be01d41381$d37b2940$7a717bc0$@gmail.com> <0D8F95CB-0AAB-45FD-ADC8-3B917C1460D4@workday.com> Message-ID: +openstack-dev@ On Wed, 4 Jul 2018 14:50:26 +0000, Bogdan Katynski wrote: >> But, I can not use nova command, endpoint nova have been redirected from https to http. 
Here:http://prntscr.com/k2e8s6 (command: nova –insecure service list) > First of all, it seems that the nova client is hitting /v2.1 instead of /v2.1/ URI and this seems to be triggering the redirect. > > Since openstack CLI works, I presume it must be using the correct URL and hence it’s not getting redirected. > >> >> And this is error log: Unable to establish connection tohttp://192.168.30.70:8774/v2.1/: ('Connection aborted.', BadStatusLine("''",)) >> > Looks to me that nova-api does a redirect to an absolute URL. I suspect SSL is terminated on the HAProxy and nova-api itself is configured without SSL so it redirects to an http URL. > > In my opinion, nova would be more load-balancer friendly if it used a relative URI in the redirect but that’s outside of the scope of this question and since I don’t know the context behind choosing the absolute URL, I could be wrong on that. Thanks for mentioning this. We do have a bug open in python-novaclient around a similar issue [1]. I've added comments based on this thread and will consult with the API subteam to see if there's something we can do about this in nova-api. -melanie [1] https://bugs.launchpad.net/python-novaclient/+bug/1776928 From mordred at inaugust.com Thu Jul 5 20:10:08 2018 From: mordred at inaugust.com (Monty Taylor) Date: Thu, 5 Jul 2018 15:10:08 -0500 Subject: [openstack-dev] [Openstack] [nova][api] Novaclient redirect endpoint https into http In-Reply-To: References: <00be01d41381$d37b2940$7a717bc0$@gmail.com> <0D8F95CB-0AAB-45FD-ADC8-3B917C1460D4@workday.com> Message-ID: On 07/05/2018 01:55 PM, melanie witt wrote: > +openstack-dev@ > > On Wed, 4 Jul 2018 14:50:26 +0000, Bogdan Katynski wrote: >>> But, I can not use nova command, endpoint nova have been redirected >>> from https to http. Here:http://prntscr.com/k2e8s6  (command: nova >>> –insecure service list) >> First of all, it seems that the nova client is hitting /v2.1 instead >> of /v2.1/ URI and this seems to be triggering the redirect. >> >> Since openstack CLI works, I presume it must be using the correct URL >> and hence it’s not getting redirected. >> >>> And this is error log: Unable to establish connection >>> tohttp://192.168.30.70:8774/v2.1/: ('Connection aborted.', >>> BadStatusLine("''",)) >> Looks to me that nova-api does a redirect to an absolute URL. I >> suspect SSL is terminated on the HAProxy and nova-api itself is >> configured without SSL so it redirects to an http URL. >> >> In my opinion, nova would be more load-balancer friendly if it used a >> relative URI in the redirect but that’s outside of the scope of this >> question and since I don’t know the context behind choosing the >> absolute URL, I could be wrong on that. > > Thanks for mentioning this. We do have a bug open in python-novaclient > around a similar issue [1]. I've added comments based on this thread and > will consult with the API subteam to see if there's something we can do > about this in nova-api. A similar thing came up the other day related to keystone and version discovery. Version discovery documents tend to return full urls - even though relative urls would make public/internal API endpoints work better. (also, sometimes people don't configure things properly and the version discovery url winds up being incorrect) In shade/sdk - we actually construct a wholly-new discovery url based on the url used for the catalog and the url in the discovery document since we've learned that the version discovery urls are frequently broken. 
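To make that concrete, a simplified sketch of that kind of reconstruction (a toy illustration of the idea only, not the actual shade/keystoneauth code) is basically "keep scheme/host/port from the catalog entry the user actually hit, keep only the path from the discovery document":

    from urllib.parse import urlsplit, urlunsplit

    def rebuild_versioned_url(catalog_url, discovered_url):
        # scheme/netloc come from the catalog endpoint that was used;
        # only the path is taken from the version discovery document
        catalog = urlsplit(catalog_url)
        discovered = urlsplit(discovered_url)
        return urlunsplit((catalog.scheme, catalog.netloc,
                           discovered.path, '', ''))

    # A misconfigured discovery document pointing at an internal address
    # gets rebuilt onto the endpoint the client actually used:
    rebuild_versioned_url('https://compute.example.com:8774',
                          'http://10.0.0.5:8774/v2.1')
    # -> 'https://compute.example.com:8774/v2.1'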
This is problematic because SOMETIMES people have public urls deployed as a sub-url and internal urls deployed on a port - so you have: Catalog: public: https://example.com/compute internal: https://compute.example.com:1234 Version discovery: https://example.com/compute/v2.1 When we go to combine the catalog url and the versioned url, if the user is hitting internal, we product https://compute.example.com:1234/compute/v2.1 - because we have no way of systemically knowing that /compute should also be stripped. VERY LONG WINDED WAY of saying 2 things: a) Relative URLs would be *way* friendlier (and incidentally are supported by keystoneauth, openstacksdk and shade - and are written up as being a thing people *should* support in the documents about API consumption) b) Can we get agreement that changing behavior to return or redirect to a relative URL would not be considered an api contract break? (it's possible the answer to this is 'no' - so it's a real question) Monty From Kevin.Fox at pnnl.gov Thu Jul 5 20:17:08 2018 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Thu, 5 Jul 2018 20:17:08 +0000 Subject: [openstack-dev] [tc] [all] TC Report 18-26 In-Reply-To: References: <70afeb87-37c9-1595-ffa4-aadbd1a90228@gmail.com> <1A3C52DFCD06494D8528644858247BF01C141B52@EX10MBOX03.pnnl.gov> <80d67ca3-81ef-875a-7ccb-2afad8913fb1@redhat.com> <67f015ab-b181-3ff0-4e4b-c30c503e1268@gmail.com> <1A3C52DFCD06494D8528644858247BF01C1432C5@EX10MBOX03.pnnl.gov> <8c967664-76a5-8dde-6c3b-80801641eb9c@redhat.com> <1A3C52DFCD06494D8528644858247BF01C144D40@EX10MBOX03.pnnl.gov>, Message-ID: <1A3C52DFCD06494D8528644858247BF01C144EA2@EX10MBOX03.pnnl.gov> I use RDO in production. Its pretty far from RedHat OpenStack. though its been a while since I tried the TripleO part of RDO. Is it pretty well integrated now? Similar to RedHat OpenStack? or is it more Fedora like then CentOS like? Thanks, Kevin ________________________________ From: Dmitry Tantsur [dtantsur at redhat.com] Sent: Thursday, July 05, 2018 11:17 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26 On Thu, Jul 5, 2018, 19:31 Fox, Kevin M > wrote: We're pretty far into a tangent... /me shrugs. I've done it. It can work. Some things your right. deploying k8s is more work then deploying ansible. But what I said depends on context. If your goal is to deploy k8s/manage k8s then having to learn how to use k8s is not a big ask. adding a different tool such as ansible is an extra cognitive dependency. Deploying k8s doesn't need a general solution to deploying generic base OS's. Just enough OS to deploy K8s and then deploy everything on top in containers. Deploying a seed k8s with minikube is pretty trivial. I'm not suggesting a solution here to provide generic provisioning to every use case in the datacenter. But enough to get a k8s based cluster up and self hosted enough where you could launch other provisioning/management tools in that same cluster, if you need that. It provides a solid base for the datacenter on which you can easily add the services you need for dealing with everything. All of the microservices I mentioned can be wrapped up in a single helm chart and deployed with a single helm install command. I don't have permission to release anything at the moment, so I can't prove anything right now. So, take my advice with a grain of salt. :) Switching gears, you said why would users use lfs when they can use a distro, so why use openstack without a distro. 
I'd say, today unless you are paying a lot, there isn't really an equivalent distro that isn't almost as much effort as lfs when you consider day2 ops. To compare with Redhat again, we have a RHEL (redhat openstack), and Rawhide (devstack) but no equivalent of CentOS. Though I think TripleO has been making progress on this front... It's RDO what you're looking for (equivalent of centos). TripleO is an installer project, not a distribution. Anyway. This thread is I think 2 tangents away from the original topic now. If folks are interested in continuing this discussion, lets open a new thread. Thanks, Kevin ________________________________________ From: Dmitry Tantsur [dtantsur at redhat.com] Sent: Wednesday, July 04, 2018 4:24 AM To: openstack-dev at lists.openstack.org Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26 Tried hard to avoid this thread, but this message is so much wrong.. On 07/03/2018 09:48 PM, Fox, Kevin M wrote: > I don't dispute trivial, but a self hosting k8s on bare metal is not incredibly hard. In fact, it is easier then you might think. k8s is a platform for deploying/managing services. Guess what you need to provision bare metal? Just a few microservices. A dhcp service. dhcpd in a daemonset works well. some pxe infrastructure. pixiecore with a simple http backend works pretty well in practice. a service to provide installation instructions. nginx server handing out kickstart files for example. and a place to fetch rpms from in case you don't have internet access or want to ensure uniformity. nginx server with a mirror yum repo. Its even possible to seed it on minikube and sluff it off to its own cluster. > > The main hard part about it is currently no one is shipping a reference implementation of the above. That may change... > > It is certainly much much easier then deploying enough OpenStack to get a self hosting ironic working. Side note: no, it's not. What you describe is similarly hard to installing standalone ironic from scratch and much harder than using bifrost for everything. Especially when you try to do it in production. Especially with unusual operating requirements ("no TFTP servers on my network"). Also, sorry, I cannot resist: "Guess what you need to orchestrate containers? Just a few things. A container runtime. Docker works well. some remove execution tooling. ansible works pretty well in practice. It is certainly much much easier then deploying enough k8s to get a self hosting containers orchestration working." Such oversimplications won't bring us anywhere. Sometimes things are hard because they ARE hard. Where are people complaining that installing a full GNU/Linux distributions from upstream tarballs is hard? How many operators here use LFS as their distro? If we are okay with using a distro for GNU/Linux, why using a distro for OpenStack causes so much contention? > > Thanks, > Kevin > > ________________________________________ > From: Jay Pipes [jaypipes at gmail.com] > Sent: Tuesday, July 03, 2018 10:06 AM > To: openstack-dev at lists.openstack.org > Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26 > > On 07/02/2018 03:31 PM, Zane Bitter wrote: >> On 28/06/18 15:09, Fox, Kevin M wrote: >>> * made the barrier to testing/development as low as 'curl >>> http://......minikube; minikube start' (this spurs adoption and >>> contribution) >> >> That's not so different from devstack though. >> >>> * not having large silo's in deployment projects allowed better >>> communication on common tooling. 
>>> * Operator focused architecture, not project based architecture. >>> This simplifies the deployment situation greatly. >>> * try whenever possible to focus on just the commons and push vendor >>> specific needs to plugins so vendors can deal with vendor issues >>> directly and not corrupt the core. >> >> I agree with all of those, but to be fair to OpenStack, you're leaving >> out arguably the most important one: >> >> * Installation instructions start with "assume a working datacenter" >> >> They have that luxury; we do not. (To be clear, they are 100% right to >> take full advantage of that luxury. Although if there are still folks >> who go around saying that it's a trivial problem and OpenStackers must >> all be idiots for making it look so difficult, they should really stop >> embarrassing themselves.) > > This. > > There is nothing trivial about the creation of a working datacenter -- > never mind a *well-running* datacenter. Comparing Kubernetes to > OpenStack -- particular OpenStack's lower levels -- is missing this > fundamental point and ends up comparing apples to oranges. > > Best, > -jay > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From Kevin.Fox at pnnl.gov Thu Jul 5 20:20:59 2018 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Thu, 5 Jul 2018 20:20:59 +0000 Subject: [openstack-dev] [tc] [all] TC Report 18-26 In-Reply-To: <20180705174725.hq4czbw2ldzdh2dn@yuggoth.org> References: <70afeb87-37c9-1595-ffa4-aadbd1a90228@gmail.com> <1A3C52DFCD06494D8528644858247BF01C141B52@EX10MBOX03.pnnl.gov> <80d67ca3-81ef-875a-7ccb-2afad8913fb1@redhat.com> <67f015ab-b181-3ff0-4e4b-c30c503e1268@gmail.com> <1A3C52DFCD06494D8528644858247BF01C1432C5@EX10MBOX03.pnnl.gov> <8c967664-76a5-8dde-6c3b-80801641eb9c@redhat.com> <1A3C52DFCD06494D8528644858247BF01C144D40@EX10MBOX03.pnnl.gov>, <20180705174725.hq4czbw2ldzdh2dn@yuggoth.org> Message-ID: <1A3C52DFCD06494D8528644858247BF01C144EB5@EX10MBOX03.pnnl.gov> Interesting. Thanks for the link. :) There is a lot of stuff there, so not sure it covers the part I'm talking about without more review. but if it doesn't it would be pretty easy to add by the looks of it. 
Kevin ________________________________________ From: Jeremy Stanley [fungi at yuggoth.org] Sent: Thursday, July 05, 2018 10:47 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26 On 2018-07-05 17:30:23 +0000 (+0000), Fox, Kevin M wrote: [...] > Deploying k8s doesn't need a general solution to deploying generic > base OS's. Just enough OS to deploy K8s and then deploy everything > on top in containers. Deploying a seed k8s with minikube is pretty > trivial. I'm not suggesting a solution here to provide generic > provisioning to every use case in the datacenter. But enough to > get a k8s based cluster up and self hosted enough where you could > launch other provisioning/management tools in that same cluster, > if you need that. It provides a solid base for the datacenter on > which you can easily add the services you need for dealing with > everything. > > All of the microservices I mentioned can be wrapped up in a single > helm chart and deployed with a single helm install command. > > I don't have permission to release anything at the moment, so I > can't prove anything right now. So, take my advice with a grain of > salt. :) [...] Anything like http://www.airshipit.org/ ? -- Jeremy Stanley From doug at doughellmann.com Thu Jul 5 20:46:21 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 05 Jul 2018 16:46:21 -0400 Subject: [openstack-dev] [python3][tc][infra][docs] changing the documentation build PTI to use tox Message-ID: <1530823071-sup-2420@lrrr.local> I have a governance patch up [1] to change the project-testing-interface (PTI) for building documentation to restore the use of tox. We originally changed away from tox because we wanted to have a single standard command that anyone could use to build the documentation for a project. It turns out that is more complicated than just running sphinx-build in a lot of cases anyway, because of course you have a bunch of dependencies to install before sphinx-build will work. Updating the job that uses sphinx directly to run under python 3, while allowing the transition to be self-testing, was going to require writing some extra complexity to look at something in the repository to decide what version of python to use. Since tox handles that for us by letting us set basepython in the virtualenv configuration, it seemed more straightforward to go back to using tox. So, this new PTI definition restores the use of tox and specifies a "docs" environment. I have started defining the relevant jobs [2] and project templates [3], and I will be updating the python3-first transition plan as well. Let me know if you have any questions about any of that, Doug [1] https://review.openstack.org/#/c/580495/ [2] https://review.openstack.org/#/q/project:openstack-infra/project-config+topic:python3-first [3] https://review.openstack.org/#/q/project:openstack-infra/openstack-zuul-jobs+topic:python3-first From soulxu at gmail.com Fri Jul 6 02:03:16 2018 From: soulxu at gmail.com (Alex Xu) Date: Fri, 6 Jul 2018 10:03:16 +0800 Subject: [openstack-dev] [Openstack] [nova][api] Novaclient redirect endpoint https into http In-Reply-To: References: <00be01d41381$d37b2940$7a717bc0$@gmail.com> <0D8F95CB-0AAB-45FD-ADC8-3B917C1460D4@workday.com> Message-ID: 2018-07-06 2:55 GMT+08:00 melanie witt : > +openstack-dev@ > > On Wed, 4 Jul 2018 14:50:26 +0000, Bogdan Katynski wrote: > >> But, I can not use nova command, endpoint nova have been redirected from >>> https to http. 
Here:http://prntscr.com/k2e8s6 (command: nova –insecure >>> service list) >>> >> First of all, it seems that the nova client is hitting /v2.1 instead of >> /v2.1/ URI and this seems to be triggering the redirect. >> >> Since openstack CLI works, I presume it must be using the correct URL and >> hence it’s not getting redirected. >> >> And this is error log: Unable to establish connection tohttp:// >>> 192.168.30.70:8774/v2.1/: ('Connection aborted.', BadStatusLine("''",)) >>> >>> >> Looks to me that nova-api does a redirect to an absolute URL. I suspect >> SSL is terminated on the HAProxy and nova-api itself is configured without >> SSL so it redirects to an http URL. >> >> In my opinion, nova would be more load-balancer friendly if it used a >> relative URI in the redirect but that’s outside of the scope of this >> question and since I don’t know the context behind choosing the absolute >> URL, I could be wrong on that. >> > > Thanks for mentioning this. We do have a bug open in python-novaclient > around a similar issue [1]. I've added comments based on this thread and > will consult with the API subteam to see if there's something we can do > about this in nova-api. > > Emm...check with the RFC, it said the value of Location header is absolute URL https://tools.ietf.org/html/rfc2616.html#section-14.30 > -melanie > > [1] https://bugs.launchpad.net/python-novaclient/+bug/1776928 > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From soulxu at gmail.com Fri Jul 6 02:30:15 2018 From: soulxu at gmail.com (Alex Xu) Date: Fri, 6 Jul 2018 10:30:15 +0800 Subject: [openstack-dev] [Openstack] [nova][api] Novaclient redirect endpoint https into http In-Reply-To: References: <00be01d41381$d37b2940$7a717bc0$@gmail.com> <0D8F95CB-0AAB-45FD-ADC8-3B917C1460D4@workday.com> Message-ID: 2018-07-06 10:03 GMT+08:00 Alex Xu : > > > 2018-07-06 2:55 GMT+08:00 melanie witt : > >> +openstack-dev@ >> >> On Wed, 4 Jul 2018 14:50:26 +0000, Bogdan Katynski wrote: >> >>> But, I can not use nova command, endpoint nova have been redirected from >>>> https to http. Here:http://prntscr.com/k2e8s6 (command: nova >>>> –insecure service list) >>>> >>> First of all, it seems that the nova client is hitting /v2.1 instead of >>> /v2.1/ URI and this seems to be triggering the redirect. >>> >>> Since openstack CLI works, I presume it must be using the correct URL >>> and hence it’s not getting redirected. >>> >>> And this is error log: Unable to establish connection tohttp:// >>>> 192.168.30.70:8774/v2.1/: ('Connection aborted.', BadStatusLine("''",)) >>>> >>>> >>> Looks to me that nova-api does a redirect to an absolute URL. I suspect >>> SSL is terminated on the HAProxy and nova-api itself is configured without >>> SSL so it redirects to an http URL. >>> >>> In my opinion, nova would be more load-balancer friendly if it used a >>> relative URI in the redirect but that’s outside of the scope of this >>> question and since I don’t know the context behind choosing the absolute >>> URL, I could be wrong on that. >>> >> >> Thanks for mentioning this. We do have a bug open in python-novaclient >> around a similar issue [1]. 
I've added comments based on this thread and >> will consult with the API subteam to see if there's something we can do >> about this in nova-api. >> >> > Emm...check with the RFC, it said the value of Location header is absolute > URL https://tools.ietf.org/html/rfc2616.html#section-14.30 > Sorry, correct that. the RFC7231 updated that. The relativeURL is ok. https://tools.ietf.org/html/rfc7231#section-7.1.2 > > >> -melanie >> >> [1] https://bugs.launchpad.net/python-novaclient/+bug/1776928 >> >> >> >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Louie.Kwan at windriver.com Fri Jul 6 02:43:26 2018 From: Louie.Kwan at windriver.com (Kwan, Louie) Date: Fri, 6 Jul 2018 02:43:26 +0000 Subject: [openstack-dev] [masakari] Introspective Instance Monitoring through QEMU Guest Agent Message-ID: <47EFB32CD8770A4D9590812EE28C977E9637B6A9@ALA-MBD.corp.ad.wrs.com> Thanks Tushar Patil for the +1 for 547118. In regards of the following review: https://review.openstack.org/#/c/534958/ Any more comment? Thanks. Louie ________________________________________ From: Tushar Patil (Code Review) [review at openstack.org] Sent: Tuesday, July 03, 2018 8:48 PM To: Kwan, Louie Cc: Tim Bell; zhangyanying; Waines, Greg; Li Yingjun; wangqiang-bj; Tushar Patil; Ken Young; NTT system-fault-ci masakari-integration-ci; wangqiang; Abhishek Kekane; takahara.kengo; Rikimaru Honjo; Adam Spiers; Sampath Priyankara (samP); Dinesh Bhor Subject: Change in openstack/masakari[master]: Introspective Instance Monitoring through QEMU Guest Agent Tushar Patil has posted comments on this change. ( https://review.openstack.org/547118 ) Change subject: Introspective Instance Monitoring through QEMU Guest Agent ...................................................................... 
Patch Set 3: Workflow+1 -- To view, visit https://review.openstack.org/547118 To unsubscribe, visit https://review.openstack.org/settings Gerrit-MessageType: comment Gerrit-Change-Id: I9efc6afc8d476003d3aa7fee8c31bcaa65438674 Gerrit-PatchSet: 3 Gerrit-Project: openstack/masakari Gerrit-Branch: master Gerrit-Owner: Louie Kwan Gerrit-Reviewer: Abhishek Kekane Gerrit-Reviewer: Adam Spiers Gerrit-Reviewer: Dinesh Bhor Gerrit-Reviewer: Greg Waines Gerrit-Reviewer: Hieu LE Gerrit-Reviewer: Ken Young Gerrit-Reviewer: Li Yingjun Gerrit-Reviewer: Louie Kwan Gerrit-Reviewer: NTT system-fault-ci masakari-integration-ci Gerrit-Reviewer: Rikimaru Honjo Gerrit-Reviewer: Sampath Priyankara (samP) Gerrit-Reviewer: Tim Bell Gerrit-Reviewer: Tushar Patil Gerrit-Reviewer: Tushar Patil Gerrit-Reviewer: Zuul Gerrit-Reviewer: takahara.kengo Gerrit-Reviewer: wangqiang Gerrit-Reviewer: wangqiang-bj Gerrit-Reviewer: zhangyanying Gerrit-HasComments: No From gmann at ghanshyammann.com Fri Jul 6 03:01:23 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 06 Jul 2018 12:01:23 +0900 Subject: [openstack-dev] [Openstack] [nova][api] Novaclient redirect endpoint https into http In-Reply-To: References: <00be01d41381$d37b2940$7a717bc0$@gmail.com> <0D8F95CB-0AAB-45FD-ADC8-3B917C1460D4@workday.com> Message-ID: <1646d8977e9.c16cb88422919.6296983684506657969@ghanshyammann.com> ---- On Fri, 06 Jul 2018 11:30:15 +0900 Alex Xu wrote ---- > > > 2018-07-06 10:03 GMT+08:00 Alex Xu : > > > 2018-07-06 2:55 GMT+08:00 melanie witt : > +openstack-dev@ > > On Wed, 4 Jul 2018 14:50:26 +0000, Bogdan Katynski wrote: > But, I can not use nova command, endpoint nova have been redirected from https to http. Here:http://prntscr.com/k2e8s6 (command: nova –insecure service list) > First of all, it seems that the nova client is hitting /v2.1 instead of /v2.1/ URI and this seems to be triggering the redirect. > > Since openstack CLI works, I presume it must be using the correct URL and hence it’s not getting redirected. > > And this is error log: Unable to establish connection tohttp://192.168.30.70:8774/v2.1/: ('Connection aborted.', BadStatusLine("''",)) > > Looks to me that nova-api does a redirect to an absolute URL. I suspect SSL is terminated on the HAProxy and nova-api itself is configured without SSL so it redirects to an http URL. > > In my opinion, nova would be more load-balancer friendly if it used a relative URI in the redirect but that’s outside of the scope of this question and since I don’t know the context behind choosing the absolute URL, I could be wrong on that. > > Thanks for mentioning this. We do have a bug open in python-novaclient around a similar issue [1]. I've added comments based on this thread and will consult with the API subteam to see if there's something we can do about this in nova-api. We can support both URL for version API in that case ( /v2.1 and /v2.1/ ). Redirect from relative to obsolete can be changed to map '' to 'GET': [version_controller, 'show'] route, something like [1]. [1] https://review.openstack.org/#/c/580544/ -gmann > > > Emm...check with the RFC, it said the value of Location header is absolute URL https://tools.ietf.org/html/rfc2616.html#section-14.30 > Sorry, correct that. the RFC7231 updated that. The relativeURL is ok. 
https://tools.ietf.org/html/rfc7231#section-7.1.2 -melanie > > [1] https://bugs.launchpad.net/python-novaclient/+bug/1776928 > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From yjf1970231893 at gmail.com Fri Jul 6 03:01:53 2018 From: yjf1970231893 at gmail.com (Jeff Yang) Date: Fri, 6 Jul 2018 11:01:53 +0800 Subject: [openstack-dev] [octavia] Some tips about amphora driver Message-ID: Recently, my team plans to provider load balancing services with octavia.I recorded some of the needs and suggestions of our team members.The following suggestions about amphora may be very useful. [1] User can specify image and flavor for amphora. [2] Enable multi processes(version<1.8) or multi threads(version>=1.8) for haproxy [3] Provider a script to check and clean up bad loadbalancer and amphora. Moreover we alse need to clean up neutron and nova resources about these loadblancer and amphora. The implementation of [1] and [2] depend on provider flavor framework. So it's time to implement provider flavor framework. About [3], We can't delete loadbalancer by API if the loadbalancer's status is PENDING_UPDATE or PENDING_CREATE. And we haven't api for delete amphora, so if the status of this amphora is not active it will always exists. So the script is necessary. https://storyboard.openstack.org/#!/story/2002896 -------------- next part -------------- An HTML attachment was scrubbed... URL: From tushar.vitthal.patil at gmail.com Fri Jul 6 04:53:11 2018 From: tushar.vitthal.patil at gmail.com (Tushar Patil) Date: Fri, 6 Jul 2018 13:53:11 +0900 Subject: [openstack-dev] [masakari] Introspective Instance Monitoring through QEMU Guest Agent In-Reply-To: <47EFB32CD8770A4D9590812EE28C977E9637B6A9@ALA-MBD.corp.ad.wrs.com> References: <47EFB32CD8770A4D9590812EE28C977E9637B6A9@ALA-MBD.corp.ad.wrs.com> Message-ID: Hi Louie, I will check the updated patch and reply before next IRC meeting. Thank you for your patience. Regards, Tushar Patil On Fri, Jul 6, 2018 at 11:43 AM Kwan, Louie wrote: > Thanks Tushar Patil for the +1 for 547118. > > In regards of the following review: > https://review.openstack.org/#/c/534958/ > > Any more comment? > > Thanks. > Louie > ________________________________________ > From: Tushar Patil (Code Review) [review at openstack.org] > Sent: Tuesday, July 03, 2018 8:48 PM > To: Kwan, Louie > Cc: Tim Bell; zhangyanying; Waines, Greg; Li Yingjun; wangqiang-bj; Tushar > Patil; Ken Young; NTT system-fault-ci masakari-integration-ci; wangqiang; > Abhishek Kekane; takahara.kengo; Rikimaru Honjo; Adam Spiers; Sampath > Priyankara (samP); Dinesh Bhor > Subject: Change in openstack/masakari[master]: Introspective Instance > Monitoring through QEMU Guest Agent > > Tushar Patil has posted comments on this change. ( > https://review.openstack.org/547118 ) > > Change subject: Introspective Instance Monitoring through QEMU Guest Agent > ...................................................................... 
> > > Patch Set 3: Workflow+1 > > -- > To view, visit https://review.openstack.org/547118 > To unsubscribe, visit https://review.openstack.org/settings > > Gerrit-MessageType: comment > Gerrit-Change-Id: I9efc6afc8d476003d3aa7fee8c31bcaa65438674 > Gerrit-PatchSet: 3 > Gerrit-Project: openstack/masakari > Gerrit-Branch: master > Gerrit-Owner: Louie Kwan > Gerrit-Reviewer: Abhishek Kekane > Gerrit-Reviewer: Adam Spiers > Gerrit-Reviewer: Dinesh Bhor > Gerrit-Reviewer: Greg Waines > Gerrit-Reviewer: Hieu LE > Gerrit-Reviewer: Ken Young > Gerrit-Reviewer: Li Yingjun > Gerrit-Reviewer: Louie Kwan > Gerrit-Reviewer: NTT system-fault-ci masakari-integration-ci < > masakari.integration.test at gmail.com> > Gerrit-Reviewer: Rikimaru Honjo > Gerrit-Reviewer: Sampath Priyankara (samP) > Gerrit-Reviewer: Tim Bell > Gerrit-Reviewer: Tushar Patil > Gerrit-Reviewer: Tushar Patil > Gerrit-Reviewer: Zuul > Gerrit-Reviewer: takahara.kengo > Gerrit-Reviewer: wangqiang > Gerrit-Reviewer: wangqiang-bj > Gerrit-Reviewer: zhangyanying > Gerrit-HasComments: No > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cjeanner at redhat.com Fri Jul 6 04:59:03 2018 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Fri, 6 Jul 2018 06:59:03 +0200 Subject: [openstack-dev] [TripleO] easily identifying how services are configured In-Reply-To: References: <8e58a5def5b66d229e9c304cbbc9f25c02d49967.camel@redhat.com> Message-ID: [snip] > > I would almost rather see us organize the directories by service > name/project instead of implementation. > > Instead of: > > puppet/services/nova-api.yaml > puppet/services/nova-conductor.yaml > docker/services/nova-api.yaml > docker/services/nova-conductor.yaml > > We'd have: > > services/nova/nova-api-puppet.yaml > services/nova/nova-conductor-puppet.yaml > services/nova/nova-api-docker.yaml > services/nova/nova-conductor-docker.yaml I'd also go for that one - it would be clearer and easier to search when one wants to see how the service is configured, displaying all implem for given service. The current tree is a bit unusual. > > (or perhaps even another level of directories to indicate > puppet/docker/ansible?) > > Personally, such an organization is something I'm more used to. It > feels more similar to how most would expect a puppet module or ansible > role to be organized, where you have the abstraction (service > configuration) at a higher directory level than specific > implementations. > > It would also lend itself more easily to adding implementations only > for specific services, and address the question of if a new top level > implementation directory needs to be created. For example, adding a > services/nova/nova-api-chef.yaml seems a lot less contentious than > adding a top level chef/services/nova-api.yaml. True. Easier to add new deployment ways, and probably easier to search. > > It'd also be nice if we had a way to mark the default within a given > service's directory. Perhaps services/nova/nova-api-default.yaml, > which would be a new template that just consumes the default? Or > perhaps a symlink, although it was pointed out symlinks don't work in > swift containers. Still, that could possibly be addressed in our plan > upload workflows. Then the resource-registry would point at > nova-api-default.yaml. One could easily tell which is the default > without having to cross reference with the resource-registry. 
+42 for a way to get the default implem - a template that just consume the right one should be enough and self-explanatory. Having a tree based on services instead of implem will allow that in an easy way. > > -- Cédric Jeanneret Software Engineer DFG:DF -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From lijie at unitedstack.com Fri Jul 6 08:18:36 2018 From: lijie at unitedstack.com (=?utf-8?B?UmFtYm8=?=) Date: Fri, 6 Jul 2018 16:18:36 +0800 Subject: [openstack-dev] [nova] about filter the flavor In-Reply-To: References: <20180702072003.GA3755@redhat> Message-ID: Does the "OSC“ meas the osc placement? ------------------ Original ------------------ From: "Matt Riedemann"; Date: Mon, Jul 2, 2018 10:36 PM To: "OpenStack Developmen"; Subject: Re: [openstack-dev] [nova] about filter the flavor On 7/2/2018 2:43 AM, 李杰 wrote: > Oh,sorry,not this means,in my opinion,we could filter the flavor in > flavor list.such as the cli:openstack flavor list --property key:value. There is no support for natively filtering flavors by extra specs in the compute REST API so that would have to be added with a microversion (if we wanted to add that support). So it would require a nova spec, which would be reviewed for consideration at the earliest in the Stein release. OSC could do client-side filtering if it wanted. -- Thanks, Matt __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From knikolla at bu.edu Fri Jul 6 11:21:24 2018 From: knikolla at bu.edu (Kristi Nikolla) Date: Fri, 6 Jul 2018 07:21:24 -0400 Subject: [openstack-dev] [nova] about filter the flavor In-Reply-To: References: <20180702072003.GA3755@redhat> Message-ID: OSC here refers to OpenStackClient [0]. [0]. https://docs.openstack.org/python-openstackclient/latest/ On Fri, Jul 6, 2018 at 4:44 AM Rambo wrote: > > Does the "OSC“ meas the osc placement? > > > ------------------ Original ------------------ > From: "Matt Riedemann"; > Date: Mon, Jul 2, 2018 10:36 PM > To: "OpenStack Developmen"; > Subject: Re: [openstack-dev] [nova] about filter the flavor > > On 7/2/2018 2:43 AM, 李杰 wrote: > > Oh,sorry,not this means,in my opinion,we could filter the flavor in > > flavor list.such as the cli:openstack flavor list --property key:value. > > There is no support for natively filtering flavors by extra specs in the > compute REST API so that would have to be added with a microversion (if > we wanted to add that support). So it would require a nova spec, which > would be reviewed for consideration at the earliest in the Stein > release. OSC could do client-side filtering if it wanted. 
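For what it's worth, the client-side filtering mentioned here is easy to sketch with python-novaclient: list all flavors, fetch each flavor's extra specs, and keep the ones that match. A rough, illustrative example only (it assumes an already-configured keystoneauth1 session named sess, and the helper name is made up):

    from novaclient import client

    def flavors_with_spec(sess, key, value):
        # Client-side equivalent of a hypothetical
        # "openstack flavor list --property key=value":
        # the API returns every flavor, we just post-process the list.
        nova = client.Client('2.1', session=sess)
        matches = []
        for flavor in nova.flavors.list():
            extra_specs = flavor.get_keys()  # one extra GET per flavor
            if extra_specs.get(key) == value:
                matches.append(flavor)
        return matches

The per-flavor round trip for extra specs is the main cost of doing it client side, which is also why server-side filtering behind a microversion would still be the nicer long-term answer.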
> > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From knikolla at bu.edu Fri Jul 6 11:28:53 2018 From: knikolla at bu.edu (Kristi Nikolla) Date: Fri, 6 Jul 2018 07:28:53 -0400 Subject: [openstack-dev] [Openstack] [nova][api] Novaclient redirect endpoint https into http In-Reply-To: References: <00be01d41381$d37b2940$7a717bc0$@gmail.com> <0D8F95CB-0AAB-45FD-ADC8-3B917C1460D4@workday.com> Message-ID: On Thu, Jul 5, 2018 at 4:11 PM Monty Taylor wrote: > > On 07/05/2018 01:55 PM, melanie witt wrote: > > +openstack-dev@ > > > > On Wed, 4 Jul 2018 14:50:26 +0000, Bogdan Katynski wrote: > >>> But, I can not use nova command, endpoint nova have been redirected > >>> from https to http. Here:http://prntscr.com/k2e8s6 (command: nova > >>> –insecure service list) > >> First of all, it seems that the nova client is hitting /v2.1 instead > >> of /v2.1/ URI and this seems to be triggering the redirect. > >> > >> Since openstack CLI works, I presume it must be using the correct URL > >> and hence it’s not getting redirected. > >> > >>> And this is error log: Unable to establish connection > >>> tohttp://192.168.30.70:8774/v2.1/: ('Connection aborted.', > >>> BadStatusLine("''",)) > >> Looks to me that nova-api does a redirect to an absolute URL. I > >> suspect SSL is terminated on the HAProxy and nova-api itself is > >> configured without SSL so it redirects to an http URL. > >> > >> In my opinion, nova would be more load-balancer friendly if it used a > >> relative URI in the redirect but that’s outside of the scope of this > >> question and since I don’t know the context behind choosing the > >> absolute URL, I could be wrong on that. > > > > Thanks for mentioning this. We do have a bug open in python-novaclient > > around a similar issue [1]. I've added comments based on this thread and > > will consult with the API subteam to see if there's something we can do > > about this in nova-api. > > A similar thing came up the other day related to keystone and version > discovery. Version discovery documents tend to return full urls - even > though relative urls would make public/internal API endpoints work > better. (also, sometimes people don't configure things properly and the > version discovery url winds up being incorrect) > > In shade/sdk - we actually construct a wholly-new discovery url based on > the url used for the catalog and the url in the discovery document since > we've learned that the version discovery urls are frequently broken. 
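To sketch the kind of recombination being described (a simplified illustration only, not the actual keystoneauth/shade code), the idea is to trust the catalog for scheme/host/port and reuse the path from the discovery document:

    from urllib.parse import urlparse

    def combine(catalog_url, discovery_url):
        # Keep scheme/host/port from the catalog entry, but take the path
        # from the discovery document, whose host is often wrong.
        catalog = urlparse(catalog_url)
        discovery = urlparse(discovery_url)
        return catalog._replace(path=discovery.path).geturl()

    # combine('https://compute.example.com:1234',
    #         'https://example.com/compute/v2.1')
    # -> 'https://compute.example.com:1234/compute/v2.1'
    # (nothing tells us the '/compute' prefix belongs only to the public
    #  endpoint - which is exactly the problem described next)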
> > This is problematic because SOMETIMES people have public urls deployed > as a sub-url and internal urls deployed on a port - so you have: > > Catalog: > public: https://example.com/compute > internal: https://compute.example.com:1234 > > Version discovery: > https://example.com/compute/v2.1 > > When we go to combine the catalog url and the versioned url, if the user > is hitting internal, we product > https://compute.example.com:1234/compute/v2.1 - because we have no way > of systemically knowing that /compute should also be stripped. > > VERY LONG WINDED WAY of saying 2 things: > > a) Relative URLs would be *way* friendlier (and incidentally are > supported by keystoneauth, openstacksdk and shade - and are written up > as being a thing people *should* support in the documents about API > consumption) > > b) Can we get agreement that changing behavior to return or redirect to > a relative URL would not be considered an api contract break? (it's > possible the answer to this is 'no' - so it's a real question) If the answer is 'no', can we find a process that gets us there? Or are we doomed by the inability to version the version document? > > Monty > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From sean.mcginnis at gmx.com Fri Jul 6 12:41:19 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 6 Jul 2018 07:41:19 -0500 Subject: [openstack-dev] [release] Release countdown for week R-7, July 9-13 Message-ID: <20180706124119.GA18399@sm-workstation> Welcome to our weekly release countdown. Development Focus ----------------- Teams should be focused on implementing planned work for the cycle and bug fixes. General Information ------------------- The deadline for extra ATC's is on July 13. If there is someone that contributes to your project in a way that is not reflected by the usual metrics. Extra-ATCs can be added by submitting an update to the reference/projects.yaml file in the openstack/governance repo. As we get closer to the end of the cycle, we have deadlines coming up for client and non-client libraries to ensure any dependency issues are worked out and we have time to make any critical fixes before the final release candidates. To this end, it is good practice to release libraries throughout the cycle once they have accumulated any significant functional changes. The following libraries appear to have some merged changes that have not been release that could potentially impact consumers of the library. It would be good to consider getting these released ahead of the deadline to make sure the changes have some run time: openstack/osc-placement openstack/oslo.messaging openstack/ovsdbapp openstack/python-brick-cinderclient-ext openstack/python-magnumclient openstack/python-novaclient openstack/python-qinlingclient openstack/python-swiftclient openstack/python-tripleoclient openstack/sushy Stein Release Schedule -------------------------------- As some of you may be aware, there is discussion underway about changing the Summit and PTG events starting in 2019. With that in mind, we have a draft schedule for the Stein release proposed to be able to see what it might look like to adjust to the expected changes. 
Please take a look at the possible schedule and let us know if you see any major conflicts due to holidays or other events that we have not accounted for: http://logs.openstack.org/94/575794/1/check/build-openstack-sphinx-docs/b522825/html/stein/schedule.html Upcoming Deadlines & Dates -------------------------- Final non-client library release deadline: July 19 Final client library release deadline: July 26 Rocky-3 Milestone: July 26 -- Sean McGinnis (smcginnis) From johnsomor at gmail.com Fri Jul 6 14:03:55 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Fri, 6 Jul 2018 07:03:55 -0700 Subject: [openstack-dev] [octavia] Some tips about amphora driver In-Reply-To: References: Message-ID: Hi Jeff, Thank you for your comments. I will reply on the story. Michael On Thu, Jul 5, 2018 at 8:02 PM Jeff Yang wrote: > > Recently, my team plans to provider load balancing services with octavia.I recorded some of the needs and suggestions of our team members.The following suggestions about amphora may be very useful. > > [1] User can specify image and flavor for amphora. > [2] Enable multi processes(version<1.8) or multi threads(version>=1.8) for haproxy > [3] Provider a script to check and clean up bad loadbalancer and amphora. Moreover we alse need to clean up neutron and nova resources about these loadblancer and amphora. > > The implementation of [1] and [2] depend on provider flavor framework. So it's time to implement provider flavor framework. > About [3], We can't delete loadbalancer by API if the loadbalancer's status is PENDING_UPDATE or PENDING_CREATE. And we haven't api for delete amphora, so if the status of this amphora is not active it will always exists. So the script is necessary. > > https://storyboard.openstack.org/#!/story/2002896 > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From cdent+os at anticdent.org Fri Jul 6 14:09:13 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 6 Jul 2018 15:09:13 +0100 (BST) Subject: [openstack-dev] [nova] [placement] placement update 18-27 Message-ID: HTML: https://anticdent.org/placement-update-18-27.html This is placement update 18-27, a weekly update of ongoing development related to the [OpenStack](https://www.openstack.org/) [placement service](https://developer.openstack.org/api-ref/placement/). This is a contract version. # Most Important In the past week or so we've found a two suites of bugs that are holding up other work. One set is related to consumers and the handling of consumer generations (linked in that theme, below). The other is related to various ways in which managing parents of nested providers is not correct. Those are: * * The first is already fixed, the second was discovered as a result of thinking about the first. We also have some open questions about which of consumer id, project id, and user id are definitely going to be a valid UUID and what that means in relation to enforcement and our definition of "what's a good uuid": * As usual, this is more support for the fact that we need to be doing increased manual testing to find where our automated tests have gaps, and fill them. On themes, we have several things rushing for attention before the end of the cycle (reminder: Feature Freeze is the end of this month). 
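(A concrete aside on the "what's a good uuid" question above: the hyphenated and unhyphenated spellings quoted in the Questions section below are the same value once parsed, so any comparison or uniqueness constraint applied to the raw strings will treat them as two different consumers. Python's uuid module shows the equivalence directly:

    import uuid

    a = uuid.UUID('5eb033fd-c550-420e-a31c-3ec2703a403c')
    b = uuid.UUID('5eb033fdc550420ea31c3ec2703a403c')
    assert a == b              # same UUID once parsed
    assert str(a) == str(b)    # both normalize to the hyphenated spelling

Whether placement should normalize on input, reject one spelling, or keep treating them as distinct strings is part of what still needs deciding.)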
We need to get the various facets of consumers fixed up in a way that we all agree is correct. We need to get the Reshaped Providers implemented. And there's some hope (maybe vain?) that we can get the report client and virt drivers talking in a more nested and shared form. # What's Changed The microversion for nested allocation candidates has merged as 1.29. The huge pile of osc-placement changes at has merged. Yay! # Bugs * Placement related [bugs not yet in progress](https://goo.gl/TgiPXb): 16, -3 on last week. * [In progress placement bugs](https://goo.gl/vzGGDQ) 17, +7 on last week. # Questions * Will consumer id, project and user id always be a UUID? We've established for certain that user id will not, but things are less clear for the other two. This issue is compounded by the fact that these two strings are different but the same UUID: 5eb033fd-c550-420e-a31c-3ec2703a403c, 5eb033fdc550420ea31c3ec2703a403c (bug 1758057 mentioned above) but we treat them differently in our code. # Main Themes ## Documentation This is a section for reminding us to document all the fun stuff we are enabling. Open areas include: * Documenting optional placement database. A bug, [1778227](https://bugs.launchpad.net/nova/+bug/1778227) has been created to track this. This has started, for the install docs, but there are more places that need to be touched. * "How to deploy / model shared disk. Seems fairly straight-forward, and we could even maybe create a multi-node ceph job that does this - wouldn't that be awesome?!?!", says an enthusiastic Matt Riedemann. * The whens and wheres of re-shaping and VGPUs. ## Nested providers in allocation candidates The main code of this has merged. What's left are dealing with things like the parenting bugs mentioned above, and actually reporting any nested providers and inventory so we can make use of them. ## Consumer Generations A fair bit of open bugs fixes and debate on this stuff. * No ability to update consumer's project and/or user external ID * Consumers never get deleted * Add UUID validation for consumer_uuid (This one has some of the discussion about whether consumers are always going to be UUIDs) * move lookup of provider from _new_allocations() * return 404 when no consumer found in allocs Note that once this is correct we'll still have work to do in the report client to handle consumer generations before nova can do anything with it. ## Reshape Provider Trees This allows moving inventory and allocations that were on resource provider A to resource provider B in an atomic fashion. The blueprint topic is: * There are WIPs for the HTTP parts and the resource tracker parts, on that topic. ## Mirror Host Aggregates This needs a command line tool: * ## Extraction Extraction is mostly taking a back seat at the moment while we find and fix bugs in existing features. We've also done quite a lot of the preparatory work. The main things to be thinking about are: * os-resource-classes * infra and co-gating issues that are going to come up * copying whatever nova-based test fixture we might like # Other 24 entries last week. 20 now (this is a contract week, there's plenty of new reviews not listed here). 
* Purge comp_node and res_prvdr records during deletion of cells/hosts * Get resource provider by uuid or name (osc-placement) * Tighten up ReportClient use of generation * Add unit test for non-placement resize * Move refresh time from report client to prov tree * PCPU resource class * rework how we pass candidate request information * add root parent NULL online migration * add resource_requests field to RequestSpec * Convert driver supported capabilities to compute node provider traits * Use placement.inventory.inuse in report client * ironic: Report resources as reserved when needed * Test for multiple limit/group_policy qparams * [placement] api-ref: add traits parameter * Convert 'placement_api_docs' into a Sphinx extension * Test for multiple limit/group_policy qparams * Disable limits if force_hosts or force_nodes is set * Rename auth_uri to www_authenticate_uri * Blazar's work on using placement # End You are the key to the coming revolution. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From amy at demarco.com Fri Jul 6 14:17:47 2018 From: amy at demarco.com (Amy Marrich) Date: Fri, 6 Jul 2018 09:17:47 -0500 Subject: [openstack-dev] [openstack-community] Give cinder-backup more CPU resources In-Reply-To: References: Message-ID: Hey, Forwarding to the Dev list as you may get a better response from there. Thanks, Amy (spotz) On Thu, Jul 5, 2018 at 11:30 PM, Keynes Lee/WHQ/Wistron < Keynes_Lee at wistron.com> wrote: > Hi > > > > When making “cinder backup-create” > > We found the process “cinder-backup” use 100% util of 1 CPU core on an > OpenStack Controller node. > > It not just causes a bad backup performance, also make the > openstack-cinder-backup unstable. > > Especially when we make several backup at the same time. > > > > The Controller Node has 40 CPU cores. > > Can we assign more CPU resources to cinder-backup ? > > > > > > > > > > > > > > [image: cid:image007.jpg at 01D1747D.DB260110] > > *Keynes Lee **李* *俊* *賢* > > Direct: > > +886-2-6612-1025 > > Mobile: > > +886-9-1882-3787 > > Fax: > > +886-2-6612-1991 > > > > E-Mail: > > keynes_lee at wistron.com > > > > > > > *---------------------------------------------------------------------------------------------------------------------------------------------------------------* > > *This email contains confidential or legally privileged information and is > for the sole use of its intended recipient. * > > *Any unauthorized review, use, copying or distribution of this email or > the content of this email is strictly prohibited.* > > *If you are not the intended recipient, you may reply to the sender and > should delete this e-mail immediately.* > > > *---------------------------------------------------------------------------------------------------------------------------------------------------------------* > > _______________________________________________ > Community mailing list > Community at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/community > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image001.jpg Type: image/jpeg Size: 5209 bytes Desc: not available URL: From abishop at redhat.com Fri Jul 6 14:33:49 2018 From: abishop at redhat.com (Alan Bishop) Date: Fri, 6 Jul 2018 10:33:49 -0400 Subject: [openstack-dev] [openstack-community] Give cinder-backup more CPU resources In-Reply-To: References: Message-ID: On Fri, Jul 6, 2018 at 10:18 AM Amy Marrich wrote: > Hey, > > Forwarding to the Dev list as you may get a better response from there. > > Thanks, > > Amy (spotz) > > On Thu, Jul 5, 2018 at 11:30 PM, Keynes Lee/WHQ/Wistron < > Keynes_Lee at wistron.com> wrote: > >> Hi >> >> >> >> When making “cinder backup-create” >> >> We found the process “cinder-backup” use 100% util of 1 CPU core on an >> OpenStack Controller node. >> >> It not just causes a bad backup performance, also make the >> openstack-cinder-backup unstable. >> >> Especially when we make several backup at the same time. >> >> >> >> The Controller Node has 40 CPU cores. >> >> Can we assign more CPU resources to cinder-backup ? >> > This has been addressed in [1], but it may not be in the release you're using. [1] https://github.com/openstack/cinder/commit/373b52404151d80e83004a37d543f825846edea1 Alan > [image: cid:image007.jpg at 01D1747D.DB260110] >> >> *Keynes Lee **李* *俊* *賢* >> >> Direct: >> >> +886-2-6612-1025 >> >> Mobile: >> >> +886-9-1882-3787 >> >> Fax: >> >> +886-2-6612-1991 >> >> >> >> E-Mail: >> >> keynes_lee at wistron.com >> >> >> >> >> >> >> *---------------------------------------------------------------------------------------------------------------------------------------------------------------* >> >> *This email contains confidential or legally privileged information and >> is for the sole use of its intended recipient. * >> >> *Any unauthorized review, use, copying or distribution of this email or >> the content of this email is strictly prohibited.* >> >> *If you are not the intended recipient, you may reply to the sender and >> should delete this e-mail immediately.* >> >> >> *---------------------------------------------------------------------------------------------------------------------------------------------------------------* >> >> _______________________________________________ >> Community mailing list >> Community at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/community >> >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 5209 bytes Desc: not available URL: From duncan.thomas at gmail.com Fri Jul 6 14:39:11 2018 From: duncan.thomas at gmail.com (Duncan Thomas) Date: Fri, 6 Jul 2018 15:39:11 +0100 Subject: [openstack-dev] [openstack-community] Give cinder-backup more CPU resources In-Reply-To: References: Message-ID: You can run many c-bak processes on one node, which will get fed round robin, so you should see fairly linear speedup in the many backups case until you run out of CPUs. Parallelising a single backup was something I attempted, but python makes it extremely difficult so there's no useful implementation I'm aware of. 
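If memory serves, the change Alan linked introduces a backup_workers option, so on releases that carry it the scale-out Duncan describes becomes a one-line cinder.conf change (the exact option name and allowed range are worth confirming against the release you are running):

    [DEFAULT]
    # number of cinder-backup worker processes to start on this node
    backup_workers = 8

On older releases the same effect can be approximated by manually running additional cinder-backup services on the node, as Duncan notes.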
On Fri, 6 Jul 2018, 3:18 pm Amy Marrich, wrote: > Hey, > > Forwarding to the Dev list as you may get a better response from there. > > Thanks, > > Amy (spotz) > > On Thu, Jul 5, 2018 at 11:30 PM, Keynes Lee/WHQ/Wistron < > Keynes_Lee at wistron.com> wrote: > >> Hi >> >> >> >> When making “cinder backup-create” >> >> We found the process “cinder-backup” use 100% util of 1 CPU core on an >> OpenStack Controller node. >> >> It not just causes a bad backup performance, also make the >> openstack-cinder-backup unstable. >> >> Especially when we make several backup at the same time. >> >> >> >> The Controller Node has 40 CPU cores. >> >> Can we assign more CPU resources to cinder-backup ? >> >> >> >> >> >> >> >> >> >> >> >> >> >> [image: cid:image007.jpg at 01D1747D.DB260110] >> >> *Keynes Lee **李* *俊* *賢* >> >> Direct: >> >> +886-2-6612-1025 >> >> Mobile: >> >> +886-9-1882-3787 >> >> Fax: >> >> +886-2-6612-1991 >> >> >> >> E-Mail: >> >> keynes_lee at wistron.com >> >> >> >> >> >> >> *---------------------------------------------------------------------------------------------------------------------------------------------------------------* >> >> *This email contains confidential or legally privileged information and >> is for the sole use of its intended recipient. * >> >> *Any unauthorized review, use, copying or distribution of this email or >> the content of this email is strictly prohibited.* >> >> *If you are not the intended recipient, you may reply to the sender and >> should delete this e-mail immediately.* >> >> >> *---------------------------------------------------------------------------------------------------------------------------------------------------------------* >> >> _______________________________________________ >> Community mailing list >> Community at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/community >> >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 5209 bytes Desc: not available URL: From jungleboyj at gmail.com Fri Jul 6 14:42:48 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Fri, 6 Jul 2018 09:42:48 -0500 Subject: [openstack-dev] [openstack-community] Give cinder-backup more CPU resources In-Reply-To: References: Message-ID: <198596aa-f60c-df2a-242c-380bf11d106f@gmail.com> On 7/6/2018 9:33 AM, Alan Bishop wrote: > > On Fri, Jul 6, 2018 at 10:18 AM Amy Marrich > wrote: > > Hey, > > Forwarding to the Dev list as you may get a better response from > there. > > Thanks, > > Amy (spotz) > > On Thu, Jul 5, 2018 at 11:30 PM, Keynes Lee/WHQ/Wistron > > wrote: > > Hi > > When making “cinder backup-create” > > We found the process “cinder-backup” use 100% util of 1 CPU > core on an OpenStack Controller node. > > It not just causes a bad backup performance, also make the > openstack-cinder-backup unstable. > > Especially when we make several backup at the same time. > > The Controller Node has 40 CPU cores. > > Can we assign more CPU resources to cinder-backup ? > > > This has been addressed in [1], but it may not be in the release > you're using. 
> > [1] > https://github.com/openstack/cinder/commit/373b52404151d80e83004a37d543f825846edea1 > In addition to the change above we also have https://review.openstack.org/#/c/537003/ which should also help with the stability issues.  That has been backported as far as Pike. The change for multiple processes is only in master for the Rocky release right now. Jay > Alan > > cid:image007.jpg at 01D1747D.DB260110 > > > > *Keynes  Lee **李****俊****賢* > > Direct: > > > > +886-2-6612-1025 > > Mobile: > > > > +886-9-1882-3787 > > Fax: > > > > +886-2-6612-1991 > > > > E-Mail: > > > > keynes_lee at wistron.com > > *---------------------------------------------------------------------------------------------------------------------------------------------------------------* > > *This email contains confidential or legally privileged > information and is for the sole use of its intended recipient. * > > *Any unauthorized review, use, copying or distribution of this > email or the content of this email is strictly prohibited.* > > *If you are not the intended recipient, you may reply to the > sender and should delete this e-mail immediately.* > > *---------------------------------------------------------------------------------------------------------------------------------------------------------------* > > > _______________________________________________ > Community mailing list > Community at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/community > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Fri Jul 6 15:37:25 2018 From: openstack at nemebean.com (Ben Nemec) Date: Fri, 6 Jul 2018 10:37:25 -0500 Subject: [openstack-dev] [tc] [all] TC Report 18-26 In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C144EA2@EX10MBOX03.pnnl.gov> References: <70afeb87-37c9-1595-ffa4-aadbd1a90228@gmail.com> <1A3C52DFCD06494D8528644858247BF01C141B52@EX10MBOX03.pnnl.gov> <80d67ca3-81ef-875a-7ccb-2afad8913fb1@redhat.com> <67f015ab-b181-3ff0-4e4b-c30c503e1268@gmail.com> <1A3C52DFCD06494D8528644858247BF01C1432C5@EX10MBOX03.pnnl.gov> <8c967664-76a5-8dde-6c3b-80801641eb9c@redhat.com> <1A3C52DFCD06494D8528644858247BF01C144D40@EX10MBOX03.pnnl.gov> <1A3C52DFCD06494D8528644858247BF01C144EA2@EX10MBOX03.pnnl.gov> Message-ID: <076bf5fe-24a9-91ab-7cc0-d65eb8a42802@nemebean.com> Red Hat OpenStack is based on RDO. It's not pretty far from it, it's very close. It's basically productized RDO, and in the interest of everyone's sanity we try to keep the downstream patches to a minimum. In general I would be careful trying to take the distro analogy too far though. The release cycles of the Red Hat Linux distros are very different from that of the OpenStack distros. RDO would be more akin to CentOS in terms of how closely related they are, but the relationship is inverted. 
CentOS is taking the RHEL source (which is based on whatever the current Fedora release is when a new major RHEL version gets branched) and distributing packages based on it, while RHOS is taking the RDO bits and productizing them. There's no point in having a CentOS-like distro that then repackages the RHOS source because you'd end up with essentially RDO again. RDO and RHOS don't diverge the way Fedora and RHEL do after they are branched because they're on the same release cycle. So essentially the flow with the Linux distros looks like: Upstream->Fedora->RHEL->CentOS Whereas the OpenStack distros are: Upstream->RDO->RHOS With RDO serving the purpose of both Fedora and CentOS. As for TripleO, it's been integrated with RHOS/RDO since Kilo, and I believe it has been the recommended way to deploy in production since then as well. -Ben On 07/05/2018 03:17 PM, Fox, Kevin M wrote: > I use RDO in production. Its pretty far from RedHat OpenStack. though > its been a while since I tried the TripleO part of RDO. Is it pretty > well integrated now? Similar to RedHat OpenStack? or is it more Fedora > like then CentOS like? > > Thanks, > Kevin > ------------------------------------------------------------------------ > *From:* Dmitry Tantsur [dtantsur at redhat.com] > *Sent:* Thursday, July 05, 2018 11:17 AM > *To:* OpenStack Development Mailing List (not for usage questions) > *Subject:* Re: [openstack-dev] [tc] [all] TC Report 18-26 > > > > > On Thu, Jul 5, 2018, 19:31 Fox, Kevin M > wrote: > > We're pretty far into a tangent... > > /me shrugs. I've done it. It can work. > > Some things your right. deploying k8s is more work then deploying > ansible. But what I said depends on context. If your goal is to > deploy k8s/manage k8s then having to learn how to use k8s is not a > big ask. adding a different tool such as ansible is an extra > cognitive dependency. Deploying k8s doesn't need a general solution > to deploying generic base OS's. Just enough OS to deploy K8s and > then deploy everything on top in containers. Deploying a seed k8s > with minikube is pretty trivial. I'm not suggesting a solution here > to provide generic provisioning to every use case in the datacenter. > But enough to get a k8s based cluster up and self hosted enough > where you could launch other provisioning/management tools in that > same cluster, if you need that. It provides a solid base for the > datacenter on which you can easily add the services you need for > dealing with everything. > > All of the microservices I mentioned can be wrapped up in a single > helm chart and deployed with a single helm install command. > > I don't have permission to release anything at the moment, so I > can't prove anything right now. So, take my advice with a grain of > salt. :) > > Switching gears, you said why would users use lfs when they can use > a distro, so why use openstack without a distro. I'd say, today > unless you are paying a lot, there isn't really an equivalent distro > that isn't almost as much effort as lfs when you consider day2 ops. > To compare with Redhat again, we have a RHEL (redhat openstack), and > Rawhide (devstack) but no equivalent of CentOS. Though I think > TripleO has been making progress on this front... > > > It's RDO what you're looking for (equivalent of centos). TripleO is an > installer project, not a distribution. > > > Anyway. This thread is I think 2 tangents away from the original > topic now. If folks are interested in continuing this discussion, > lets open a new thread. 
> > Thanks, > Kevin > > ________________________________________ > From: Dmitry Tantsur [dtantsur at redhat.com ] > Sent: Wednesday, July 04, 2018 4:24 AM > To: openstack-dev at lists.openstack.org > > Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26 > > Tried hard to avoid this thread, but this message is so much wrong.. > > On 07/03/2018 09:48 PM, Fox, Kevin M wrote: > > I don't dispute trivial, but a self hosting k8s on bare metal is > not incredibly hard. In fact, it is easier then you might think. k8s > is a platform for deploying/managing services. Guess what you need > to provision bare metal? Just a few microservices. A dhcp service. > dhcpd in a daemonset works well. some pxe infrastructure. pixiecore > with a simple http backend works pretty well in practice. a service > to provide installation instructions. nginx server handing out > kickstart files for example. and a place to fetch rpms from in case > you don't have internet access or want to ensure uniformity. nginx > server with a mirror yum repo. Its even possible to seed it on > minikube and sluff it off to its own cluster. > > > > The main hard part about it is currently no one is shipping a > reference implementation of the above. That may change... > > > > It is certainly much much easier then deploying enough OpenStack > to get a self hosting ironic working. > > Side note: no, it's not. What you describe is similarly hard to > installing > standalone ironic from scratch and much harder than using bifrost for > everything. Especially when you try to do it in production. > Especially with > unusual operating requirements ("no TFTP servers on my network"). > > Also, sorry, I cannot resist: > "Guess what you need to orchestrate containers? Just a few things. A > container > runtime. Docker works well. some remove execution tooling. ansible > works pretty > well in practice. It is certainly much much easier then deploying > enough k8s to > get a self hosting containers orchestration working." > > Such oversimplications won't bring us anywhere. Sometimes things are > hard > because they ARE hard. Where are people complaining that installing > a full > GNU/Linux distributions from upstream tarballs is hard? How many > operators here > use LFS as their distro? If we are okay with using a distro for > GNU/Linux, why > using a distro for OpenStack causes so much contention? > > > > > Thanks, > > Kevin > > > > ________________________________________ > > From: Jay Pipes [jaypipes at gmail.com ] > > Sent: Tuesday, July 03, 2018 10:06 AM > > To: openstack-dev at lists.openstack.org > > > Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26 > > > > On 07/02/2018 03:31 PM, Zane Bitter wrote: > >> On 28/06/18 15:09, Fox, Kevin M wrote: > >>>    * made the barrier to testing/development as low as 'curl > >>> http://......minikube; minikube start' (this spurs adoption and > >>> contribution) > >> > >> That's not so different from devstack though. > >> > >>>    * not having large silo's in deployment projects allowed better > >>> communication on common tooling. > >>>    * Operator focused architecture, not project based architecture. > >>> This simplifies the deployment situation greatly. > >>>    * try whenever possible to focus on just the commons and > push vendor > >>> specific needs to plugins so vendors can deal with vendor issues > >>> directly and not corrupt the core. 
> >> > >> I agree with all of those, but to be fair to OpenStack, you're > leaving > >> out arguably the most important one: > >> > >>       * Installation instructions start with "assume a working > datacenter" > >> > >> They have that luxury; we do not. (To be clear, they are 100% > right to > >> take full advantage of that luxury. Although if there are still > folks > >> who go around saying that it's a trivial problem and > OpenStackers must > >> all be idiots for making it look so difficult, they should > really stop > >> embarrassing themselves.) > > > > This. > > > > There is nothing trivial about the creation of a working > datacenter -- > > never mind a *well-running* datacenter. Comparing Kubernetes to > > OpenStack -- particular OpenStack's lower levels -- is missing this > > fundamental point and ends up comparing apples to oranges. > > > > Best, > > -jay > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From lhinds at redhat.com Fri Jul 6 15:56:58 2018 From: lhinds at redhat.com (Luke Hinds) Date: Fri, 6 Jul 2018 16:56:58 +0100 Subject: [openstack-dev] [cinder][security][api-wg] Adding http security headers In-Reply-To: <1530811036-sup-6615@lrrr.local> References: <1530811036-sup-6615@lrrr.local> Message-ID: On Thu, Jul 5, 2018 at 6:17 PM, Doug Hellmann wrote: > Excerpts from Jim Rollenhagen's message of 2018-07-05 12:53:34 -0400: > > On Thu, Jul 5, 2018 at 12:40 PM, Nishant Kumar E < > > nishant.e.kumar at ericsson.com> wrote: > > > > > Hi, > > > > > > > > > > > > I have registered a blueprint for adding http security headers - > > > https://blueprints.launchpad.net/cinder/+spec/http-security-headers > > > > > > > > > > > > Reason for introducing this change - I work for AT&T cloud project – > > > Network Cloud (Earlier known as AT&T integrated Cloud). As part of > working > > > there we have introduced this change within all the services as kind > of a > > > downstream change but would like to see it a part of upstream > community. 
> > > While we did not face any major threats without this change but during > our > > > investigation process we found that if dealing with web services we > should > > > maximize the security as much as possible and came up with a list of > HTTP > > > security headers that we should include as part of the OpenStack > services. > > > I would like to introduce this change as part of cinder to start off > and > > > then propagate this to all the services. > > > > > > > > > > > > Some reference links which might give more insight into this: > > > > > > - https://www.owasp.org/index.php/OWASP_Secure_Headers_ > > > Project#tab=Headers > > > - https://www.keycdn.com/blog/http-security-headers/ > > > - https://securityintelligence.com/an-introduction-to-http- > > > response-headers-for-security/ > > > > > > Please let me know if this looks good and whether it can be included as > > > part of Cinder followed by other services. More details on how the > > > implementation will be done is mentioned as part of the blueprint but > any > > > better ideas for implementation is welcomed too !! > > > > > > > Wouldn't this be a job for the HTTP server in front of cinder (or > whatever > > service)? Especially "Strict-Transport-Security" as one shouldn't be > > enabling that without ensuring a correct TLS config. > > > > Bonus points in that upstream wouldn't need any changes, and we won't > need > > to change every project. :) > > > > // jim > > Yes, this feels very much like something the deployment tools should > do when they set up Apache or uWSGI or whatever service is in front > of each API WSGI service. > > Doug > > I agree, this should all be set within an installer, rather then the base project itself. Horizon (or rather django) has directives to enable many of the common security header fields, but rather than set these directly in horizons local_settings, we patched the openstack puppet-horizon module. Take for the following for example around X-Frame disabling: https://github.com/openstack/puppet-horizon/blob/218c35ea7bc08dd88d936ab79b14e5ce2b94ea44/releasenotes/notes/disallow_iframe_embed-f0ffa1cabeca5b1e.yaml#L2 The same approach should be used elsewhere, with whatever the preferred deployment tool is (puppet, chef, ansible etc). That way if a decision is made to roll out out TLS then can also toggle in certificate pinning etc in the same tool flow. > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Fri Jul 6 16:02:57 2018 From: openstack at nemebean.com (Ben Nemec) Date: Fri, 6 Jul 2018 11:02:57 -0500 Subject: [openstack-dev] [TripleO] easily identifying how services are configured In-Reply-To: <927f5ff4ec528bdcc5877c7a1a5635c62f5f1cb5.camel@redhat.com> References: <8e58a5def5b66d229e9c304cbbc9f25c02d49967.camel@redhat.com> <927f5ff4ec528bdcc5877c7a1a5635c62f5f1cb5.camel@redhat.com> Message-ID: <5c220d66-d4e5-2b19-048c-af3a37c846a3@nemebean.com> On 07/05/2018 01:23 PM, Dan Prince wrote: > On Thu, 2018-07-05 at 14:13 -0400, James Slagle wrote: >> >> I would almost rather see us organize the directories by service >> name/project instead of implementation. 
>> >> Instead of: >> >> puppet/services/nova-api.yaml >> puppet/services/nova-conductor.yaml >> docker/services/nova-api.yaml >> docker/services/nova-conductor.yaml >> >> We'd have: >> >> services/nova/nova-api-puppet.yaml >> services/nova/nova-conductor-puppet.yaml >> services/nova/nova-api-docker.yaml >> services/nova/nova-conductor-docker.yaml >> >> (or perhaps even another level of directories to indicate >> puppet/docker/ansible?) > > I'd be open to this but doing changes on this scale is a much larger > developer and user impact than what I was thinking we would be willing > to entertain for the issue that caused me to bring this up (i.e. how to > identify services which get configured by Ansible). > > Its also worth noting that many projects keep these sorts of things in > different repos too. Like Kolla fully separates kolla-ansible and > kolla-kubernetes as they are quite divergent. We have been able to > preserve some of our common service architectures but as things move > towards kubernetes we may which to change things structurally a bit > too. True, but the current directory layout was from back when we intended to support multiple deployment tools in parallel (originally tripleo-image-elements and puppet). Since I think it has become clear that it's impractical to maintain two different technologies to do essentially the same thing I'm not sure there's a need for it now. It's also worth noting that kolla-kubernetes basically died because there wasn't enough people to maintain both deployment methods, so we're not the only ones who have found that to be true. If/when we move to kubernetes I would anticipate it going like the initial containers work did - development for a couple of cycles, then a switch to the new thing and deprecation of the old thing, then removal of support for the old thing. That being said, because of the fact that the service yamls are essentially an API for TripleO because they're referenced in user resource registries, I'm not sure it's worth the churn to move everything either. I think that's going to be an issue either way though, it's just a question of the scope. _Something_ is going to move around no matter how we reorganize so it's a problem that needs to be addressed anyway. -Ben From hamzy at us.ibm.com Fri Jul 6 16:22:27 2018 From: hamzy at us.ibm.com (Mark Hamzy) Date: Fri, 6 Jul 2018 11:22:27 -0500 Subject: [openstack-dev] [tripleo] What is the proper way to use NetConfigDataLookup? Message-ID: What is the proper way to use NetConfigDataLookup? I tried the following: (undercloud) [stack at oscloud5 ~]$ cat << '__EOF__' > ~/templates/mapping-info.yaml parameter_defaults: NetConfigDataLookup: control1: nic1: '5c:f3:fc:36:dd:68' nic2: '5c:f3:fc:36:dd:6c' nic3: '6c:ae:8b:29:27:fa' # 9.114.219.34 nic4: '6c:ae:8b:29:27:fb' # 9.114.118.??? nic5: '6c:ae:8b:29:27:fc' nic6: '6c:ae:8b:29:27:fd' compute1: nic1: '6c:ae:8b:25:34:ea' # 9.114.219.44 nic2: '6c:ae:8b:25:34:eb' nic3: '6c:ae:8b:25:34:ec' # 9.114.118.??? nic4: '6c:ae:8b:25:34:ed' compute2: nic1: '00:0a:f7:73:3c:c0' nic2: '00:0a:f7:73:3c:c1' nic3: '00:0a:f7:73:3c:c2' # 9.114.118.156 nic4: '00:0a:f7:73:3c:c3' # 9.114.112.??? 
nic5: '00:0a:f7:73:73:f4' nic6: '00:0a:f7:73:73:f5' nic7: '00:0a:f7:73:73:f6' # 9.114.219.134 nic8: '00:0a:f7:73:73:f7' __EOF__ (undercloud) [stack at oscloud5 ~]$ openstack overcloud deploy --templates -e ~/templates/node-info.yaml -e ~/templates/mapping-info.yaml -e ~/templates/overcloud_images.yaml -e ~/templates/environments/network-environment.yaml -e ~/templates/environments/network-isolation.yaml -e ~/templates/environments/config-debug.yaml --disable-validations --ntp-server pool.ntp.org --control-scale 1 --compute-scale But I did not see a /etc/os-net-config/mapping.yaml get created. Also is this configuration used when the system boots IronicPythonAgent to provision the disk? -- Mark You must be the change you wish to see in the world. -- Mahatma Gandhi Never let the future disturb you. You will meet it, if you have to, with the same weapons of reason which today arm you against the present. -- Marcus Aurelius -------------- next part -------------- An HTML attachment was scrubbed... URL: From zbitter at redhat.com Fri Jul 6 16:58:02 2018 From: zbitter at redhat.com (Zane Bitter) Date: Fri, 6 Jul 2018 12:58:02 -0400 Subject: [openstack-dev] [tc] [all] TC Report 18-26 In-Reply-To: <542aba82-5c0c-3549-e587-2deded610fe9@gmail.com> References: <70afeb87-37c9-1595-ffa4-aadbd1a90228@gmail.com> <542aba82-5c0c-3549-e587-2deded610fe9@gmail.com> Message-ID: <5e835365-2d1a-d388-66b1-88cdf8c9a0fb@redhat.com> On 02/07/18 19:13, Jay Pipes wrote: >>> Also note that when I've said that *OpenStack* should have a smaller >>> mission and scope, that doesn't mean that higher-level services >>> aren't necessary or wanted. >> >> Thank you for saying this, and could I please ask you to repeat this >> disclaimer whenever you talk about a smaller scope for OpenStack. > > Yes. I shall shout it from the highest mountains. [1] Thanks. Appreciate it :) > [1] I live in Florida, though, which has no mountains. But, when I > visit, say, North Carolina, I shall certainly shout it from their > mountains. That's where I live, so I'll keep an eye out for you if I hear shouting. >> Because for those of us working on higher-level services it feels like >> there has been a non-stop chorus (both inside and outside the project) >> of people wanting to redefine OpenStack as something that doesn't >> include us. > > I've said in the past (on Twitter, can't find the link right now, but > it's out there somewhere) something to the effect of "at some point, > someone just needs to come out and say that OpenStack is, at its core, > Nova, Neutron, Keystone, Glance and Cinder". https://twitter.com/jaypipes/status/875377520224460800 for anyone who was curious. Interestingly, that and my equally off-the-cuff reply https://twitter.com/zerobanana/status/875559517731381249 are actually pretty close to the minimal descriptions of the two broad camps we were talking about in the technical vision etherpad. (Noting for the record that cdent disputes that views can be distilled into two camps.) > Perhaps this is what you were recollecting. I would use a different > phrase nowadays to describe what I was thinking with the above. I don't think I was recalling anything in particular that *you* had said. Complaining about the non-core projects (presumably on the logic that if we kicked them out of OpenStack all their developers would instead go to work on radically simplifying the remaining projects instead?) was a widespread popular pastime for at least roughly the 4 years from 2013-2016. 
> I would say instead "Nova, Neutron, Cinder, Keystone and Glance [2] are > a definitive lower level of an OpenStack deployment. They represent a > set of required integrated services that supply the most basic > infrastructure for datacenter resource management when deploying > OpenStack." > > Note the difference in wording. Instead of saying "OpenStack is X", I'm > saying "These particular services represent a specific layer of an > OpenStack deployment". OK great. So this is wrong :) and I will attempt to explain why I think that in a second. But first I want to acknowledge what is attractive about this viewpoint (even to me). This is a genuinely useful observation that leads to a real insight. The insight, I think, is the same one we all just agreed on in another part of the thread: OpenStack is the only open source project concentrating on the gap between a rack full of unconfigured equipment and somewhere that you could, say, install Kubernetes. We write the bit where the rubber meets the road, and if we don't get it done there's nobody else to do it! There's an almost infinite variety of different applications and they'll all need different parts of the higher layers, but ultimately they'll all need to be reified in a physical data center and when they do, we'll be there: that's the core of what we're building. It's honestly only the tiniest of leaps from seeing that idea as attractive, useful, and genuinely insightful to seeing it as correct, and I don't really blame anybody who made that leap. I'm going to gloss over the fact that we punted the actual process of setting up the data center to a bunch of what turned out to be vendor-specific installer projects that you suggest should be punted out of OpenStack altogether, because that isn't the biggest problem I have with this view. Back in the '70s there was this idea about AI: even a 2 year old human can e.g. recognise images with a high degree of accuracy, but doing e.g. calculus is extremely hard in comparison and takes years of training. But computers can already do calculus! Ergo, we've solved the hardest part already and building the rest out of that will be trivial, AGI is just around the corner, &c. &c. (I believe I cribbed this explanation from an outdated memory of Marvin Minsky's 1982 paper "Why People Think Computers Can't" - specifically the section "Could a Computer Have Common Sense?" - so that's a better source if you actually want to learn something about AI.) The popularity of this idea arguably helped created the AI bubble, and the inevitable collision with the reality of its fundamental wrongness led to the AI Winter. Because in fact just because you can build logic out of many layers of heuristics (as human brains do), it absolutely does not follow that it's trivial to build other things that also require many layers of heuristics once you have some basic logic building blocks. (This is my conclusion, not Minsky's, and probably more influenced by reading summaries of Kahneman. But suffice to say the AI technology of the present, which is showing more promise, is called Deep Learning because it consists literally of many layers of heuristics. It's also still considerably worse at it than any 2 year old human.) I see the problem with the OpenStack-as-layers model as being analogous. (I don't think there's going to be a full-on OpenStack Winter, but we've certainly hit the Trough of Disillusionment.) With Nova, Neutron, Cinder, Keystone and Glance you can build a pretty good VPS hosting service. 
But it's a mistake to think that cloud is something you get by layering stuff on top of VPS hosting. It's comparatively easy to build a VPS on top of a cloud, just like teaching a child arithmetic. But it's enormously difficult to build a cloud on top of VPS (it would involve a lot of wasteful layers of abstraction, similar to building artificial neurons in software). Speaking of abstraction, let's try to pull this back to something concrete. Kubernetes is event-driven at a very fundamental level: when a pod dies, k8s gets a notification immediately and that prompts it to reschedule the workload. In contrast, Nova/Cinder/&c. is a black hole. You can't even build a sane dashboard for your VPS - let alone cloud-style orchestration - over it, because they have to spend all their time polling the API to find out if anything happened. There's an entire separate project (Masakari) that ~nobody has installed, basically dedicated to spelunking in the compute node without Nova's knowledge to try to surface this information. I am definitely not disrespecting the Masakari team, who are doing something that desperately needs doing in the only way that's really open to them, but that's an embarrassingly bad architecture for OpenStack as a whole. So yeah, it's sometimes helpful to think about the fact that there's a group of components that own the low level interaction with outside systems (hardware, or IdM in the case of Keystone), and that almost every application will end up touching those directly or indirectly, while each using different subsets of the other functionality... *but* only in the awareness that those things also need to be built from the ground up to occupy a space in a larger puzzle. When folks say stuff like these projects represent a "definitive lower level of an OpenStack deployment" they invite the listener to ignore the bigger picture; to imagine that if those lower level services just take care of their own needs then everything else can just build on top. That's a mistake, unless you believe (and I know *you* don't believe this Jay) that OpenStack needs only to provide enough building blocks to build VPS hosting out of, because support for all of those higher-level things doesn't just fall out like that. You have to consciously work at it. Imagine for a moment that, knowing everything we know now, we had designed OpenStack around a system of event sources and sinks that's reliable in the face of network partitions &c., with components connecting into it to provide services to the user and to each other. That's what Kubernetes did. That's the key to its success. We need to do enable something similar, because OpenStack is still necessary for all of the reasons above and more. In particular, I think one place where OpenStack provides value is that we are less opinionated and can allow application developers to choose how the event sources and sinks are connected together. That means that users can e.g. customise their own failovers in 'userspace' rather than the more one-size-fits-all approach of handling everything automatically inside k8s. This is theoretically the advantage of having separate projects instead of a monolithic design, and one reason why I don't think that destroying all of the boundaries between projects is the way forward for OpenStack. (I do still think it'd be a great thing for the compute node, which is entirely internal to OpenStack and definitely does not benefit from fragmentation.) 
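To make the polling point above concrete, here is a toy contrast (illustrative only, and assuming nova is a python-novaclient Client and the kubernetes Python client package is available):

    import time
    from kubernetes import client, config, watch

    def wait_for_server(nova, server_id, poll=5):
        # A Nova API consumer (dashboard, orchestrator) has no event stream
        # to subscribe to, so it has to keep asking.
        while True:
            server = nova.servers.get(server_id)
            if server.status in ('ACTIVE', 'ERROR'):
                return server.status
            time.sleep(poll)

    def watch_pods(namespace='default'):
        # A Kubernetes consumer gets the same kind of state change pushed
        # to it as a watch event.
        config.load_kube_config()
        v1 = client.CoreV1Api()
        for event in watch.Watch().stream(v1.list_namespaced_pod, namespace):
            print(event['type'], event['object'].metadata.name)

Neither snippet is a proposal; it is just the difference between having to ask and being told.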
> Nowadays, I would further add something to the effect of "Depending on > the particular use cases and workloads the OpenStack deployer wishes to > promote, an additional layer of services provides workload orchestration > and workflow management capabilities. This layer of services include > Heat, Mistral, Tacker, Service Function Chaining, Murano, etc". That makes sense, but the key point I want to make is that you can't (usefully) provide the porcelain unless the plumbing for it is in place. Right now information only flows one way - we have drains connected to the porcelain but no running water. Application developers are fetching water in buckets and heating it over an open fire medieval-style, while there are still (some) people who go around saying 'we have too much porcelain, we should just concentrate on making better drains'. Somebody dare me to stretch this metaphor even further. > Does that provide you with some closure on this feeling of "non-stop > chorus" of exclusion that you mentioned above? I'm never letting this go ;) >> The reason I haven't dropped this discussion is because I really want >> to know if _all_ of those people were actually talking about something >> else (e.g. a smaller scope for Nova), or if it's just you. Because you >> and I are in complete agreement that Nova has grown a lot of obscure >> capabilities that make it fiendishly difficult to maintain, and that >> in many cases might never have been requested if we'd had higher-level >> tools that could meet the same use cases by composing simpler operations. >> >> IMHO some of the contributing factors to that were: >> >> * The aforementioned hostility from some quarters to the existence of >> higher-level projects in OpenStack. >> * The ongoing hostility of operators to deploying any projects outside >> of Keystone/Nova/Glance/Neutron/Cinder (*still* seen playing out in >> the Barbican vs. Castellan debate, where we can't even correct one of >> OpenStack's original sins and bake in a secret store - something k8s >> managed from day one - because people don't want to install another >> ReST API even over a backend that they'll already have to install >> anyway). >> * The illegibility of public Nova interfaces to potential higher-level >> tools. > > I would like to point something else out here. Something that may not be > pleasant to confront. > > Heat's competition (for resources and mindshare) is Kubernetes, plain > and simple. For resources, that's undoubtedly true. For mindshare, that seems a bit like saying "Horses' competition for mindshare is cars". I mean, yes, but the _competition_ part was over a while back, cars won, and horses now fulfil a niche role. That's actually OK by me. When we first started Heat, it was a project to make *OpenStack* resources orchestratable. Once it was up and running, a bunch of people came to us (at the Havana summit in Portland in early 2013) and said that we needed to build a software orchestration system. Personally, I was pretty reluctant at first. Eventually they convinced me. But in retrospect while they were right about the fact that we needed better ways to deploy software via Heat than to bake it into the image and pass minimal configuration in the user_data (and Heat's Software Deployments delivered those improvements), the thing they really needed was Kubernetes. Once that existed those folks melted away from the Heat community, and that's not a terrible outcome. 
Turning Heat into k8s would be hard and distracting; it's better to integrate the two together so people can get all the functionality they need from the projects in the best position to provide it. There are still plenty of folks who need to do orchestration across all of their virtual infrastructure, and Heat is here to meet their needs. The project was always about trying to make OpenStack better and more consumable for a certain audience, and users tell us it has succeeded at that.

> Heat's competition is not other OpenStack projects.

In practical terms, Heat's competition is Horizon, shell scripts and apathy. Not necessarily in that order. Arguably Ansible as well, but mostly because we don't have any real integration with it, so people are sometimes forced to pick one or the other when they need both.

> Nova's competition is not Kubernetes (despite various people continuing
> to say that it is).
>
> Nova is not an orchestration system. Never was and (as long as I'm
> kicking and screaming) never will be.
>
> Nova's primary competition is:
>
> * Stand-alone Ironic
> * oVirt and stand-alone virsh callers
> * Parts of VMWare vCenter [3]
> * MaaS in some respects

Do you see KubeVirt or Kata or Virtlet or RancherVM ending up on this list at any point? Because a lot of people* do. And Nova is absolutely competing for resources with those projects. Having your VM provisioning thing embedded in the user's orchestration system of choice is a serious competitive advantage.

(BTW I'm currently trying to save your bacon here: http://lists.openstack.org/pipermail/openstack-dev/2018-June/131183.html)

* https://news.ycombinator.com/item?id=17013779

> * The *compute provisioning* parts of EC2, Azure, and GCP

I agree this is true in practice, but would like to note that the compute provisioning parts of those services are tied in to the rest of the cloud in ways that Nova is not tied in to the rest of OpenStack, and that is a *major* missed opportunity because it largely limits our market to the subset of people who need only the compute provisioning bits. We chose to add features to Nova to compete with vCenter/oVirt, and not to add features that would have enabled OpenStack as a whole to compete with more than just the compute provisioning subset of EC2/Azure/GCP. Meanwhile, the other projects in OpenStack were working on building the other parts of an AWS/Azure/GCP competitor. And our vague one-sentence mission statement allowed us all to maintain the delusion that we were all working on the same thing and pulling in the same direction, when in truth we haven't been at all. We can decide that we want to be one, or the other, or both. But if we don't all decide together then a lot of us are going to continue wasting our time working at cross-purposes.

> This is why there is a Kubernetes OpenStack cloud provider plugin [4].
>
> This plugin uses Nova [5] (which can potentially use Ironic), Cinder,
> Keystone and Neutron to deploy kubelets to act as nodes in a Kubernetes
> cluster and load balancer objects to act as the proxies that k8s itself
> uses when deploying Pods and Services.
>
> Heat's architecture, template language and object constructs are in
> direct competition with Kubernetes' API and architecture, with the
> primary difference being a VM-centric [6] vs. a container-centric object
> model.

Mmmm, I wouldn't really call Heat VM-centric. It's infrastructure-centric with a sideline in managing software, where K8s is software-centric.
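If it helps, the difference shows up even in the smallest possible example of each. Both snippets below are illustrative sketches only, with made-up names ('cirros', 'm1.small', 'my-app'); the point is just that a Heat template declares infrastructure resources while a Kubernetes manifest declares software to keep running:

    # Heat: declare a piece of infrastructure (illustrative sketch)
    heat_template_version: queens
    resources:
      my_server:
        type: OS::Nova::Server
        properties:
          image: cirros
          flavor: m1.small

    # Kubernetes: declare software to keep running (illustrative sketch)
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
            - name: app
              image: nginx

Neither is 'better'; they answer different questions, which is exactly why it makes more sense to connect the two than to merge them.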
Here's a blog post I wrote from back when people thought Heat's competition was Puppet(!): https://www.zerobanana.com/archive/2014/05/08#heat-configuration-management It's aged pretty well except for the fact that k8s largely owns the 'Software Orchestration' space now. (Although, really, k8s itself doesn't do 'orchestration' as such. It just starts everything up, and the application does its own co-ordination using etcd. Helm does do orchestration in the traditional sense AIUI.) > Heat's template language is similar to Helm's chart template YAML > structure [7], and with Heat's evolution to the "convergence model", > Heat's architecture actually got closer to Kubernetes' architecture: > that of continually attempting to converge an observed state with a > desired state. It's important to note that that model change never happened, and likely never will. More specifically, the set of changes labelled 'convergence' can be grouped into three different buckets, only one of which exists: 1) Feed all resource actions into a task queue for workers to pick up, enabling Heat to scale out arbitrarily (limited only by the centralised DB); and allow users to update stacks without waiting for previous operations to complete. This absolutely happened, has been the default since Newton, is used even by TripleO since Queens, and is working great. This is what most people mean when they refer to 'convergence' now (for a while we used to call it 'phase 1'). 2) Update resources by comparing their observed state to the desired state and making an incremental change to converge them, then repeat. This can itself be divided into several different implementation phases, and the first one (comparing to observed rather than last-recorded state) actually sort-of exists as an experimental option. That said, this is probably never going to be completed for a number of reasons: lack of developers/reviewers; the need to write new resource implementations, thus throwing away years worth of corner-case fixes and resulting stability; and an inability to get events in an efficient (i.e. filtered at source) and reliable way. 3) Doing this constantly all the time ('continuous convergence') even when a stack update is not in progress. We agreed not to ever do this because I argued that the user needed control over the process - it's enough that Heat could recognise that something had changed and fix it during a stack update (bucket #2), after that it's better to let the application decide when to run a stack update, either on a timer or in response to events (probably via Mistral in both cases). Maybe if we could (efficiently) get event notifications for everything it might be a different story, but there's no way we can justify constant polling of every resource in every stack whether the user needs it or not. > So, what is Heat to do? One thing we need to do is to integrate with other ways of deploying software (especially Kubernetes and Ansible), to build better bridges between the infrastructure and the software running on it. The challenging part is authenticating to those other systems. Unfortunately Keystone has never become a standard way of authenticating services outside the immediate OpenStack ecosystem. One option I want to explore more is just having the user put credentials for those other systems into Barbican. That's not an especially elegant solution, and it requires operators to actually install Barbican, but it least it's something and Heat wouldn't have to store the user's credentials itself. 
We're working on adding support for creating stacks in remote OpenStack cloud using this method, so that should help provide a model we can reuse. > The hype and marketing machine is never-ending, I'm afraid. [8] > > I'm not sure there's actually anything that can be done about this. > Perhaps it is a fait accomplis that Kubernetes/Helm will/has become > synonymous with "orchestration of things". Perhaps not. I'm not an > oracle, unfortunately. Me neither. There are even folks who think that the Zun model of container deployment is going to take over the world: https://medium.com/@steve.yegge/honestly-i-cant-stand-k8s-48c9a600e405 Who knows? He was right about Javascript. We're going to find out. > Maybe the only thing that Heat can do to fend off the coming doom is to > make a case that Heat's performance, reliability, feature set or > integration with OpenStack's other services make it a better candidate > for orchestrating virtual machine or baremetal workloads on an OpenStack > deployment than Kubernetes is. > > Sorry to be the bearer of bad news, I assure you that, contrary to popular opinion, I have not been living under a rock ;) cheers, Zane. From Luiz.Gavioli at netapp.com Fri Jul 6 17:08:08 2018 From: Luiz.Gavioli at netapp.com (Gavioli, Luiz) Date: Fri, 6 Jul 2018 17:08:08 +0000 Subject: [openstack-dev] Deprecation notice: Cinder Driver for NetApp E-Series Message-ID: <1530896888.7565.11.camel@netapp.com> Developers and Operators, NetApp’s various Cinder drivers currently provide platform integration for ONTAP powered systems, SolidFire, and E/EF-Series systems. Per systems-provided telemetry and discussion amongst our user community, we’ve learned that when E/EF-series systems are deployed with OpenStack they do not commonly make use of the platform specific Cinder driver (instead opting for use of the LVM driver or Ceph layered atop). Given that, we’re proposing to cease further development and maintenance of the E-Series drivers within OpenStack and will focus development on our widely used SolidFire and ONTAP options. In accordance with community policy [1], we are initiating the deprecation process for the NetApp E-Series drivers [2] set to conclude with their removal in the OpenStack Stein release. This will apply to both protocols currently supported in this driver: iSCSI and FC. What is being deprecated: Cinder drivers for NetApp E-Series Period of deprecation: E-Series drivers will be around in stable/rocky and will be removed in the Stein release (All milestones of this release) What should users/operators do: Any Cinder E-series deployers are encouraged to get in touch with NetApp via the community #openstack-netapp IRC channel on freenode or via the #OpenStack Slack channel on http://netapp.io. We encourage migration to the LVM driver for continued use of E-series systems in most cases via Cinder’s migrate facility [3]. [1] https://governance.openstack.org/reference/tags/assert_follows-standard-deprecation.html [2] https://review.openstack.org/#/c/580679/ [3] https://docs.openstack.org/admin-guide/blockstorage-volume-migration.html Thanks, Luiz Gavioli -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jungleboyj at gmail.com Fri Jul 6 17:29:46 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Fri, 6 Jul 2018 12:29:46 -0500 Subject: [openstack-dev] [cinder] Planning Etherpad for Denver 2018 PTG Message-ID: <80f839e7-36a4-55ca-7c01-9795e5fcf28a@gmail.com> All, I have created an etherpad to start planning for the Denver PTG in September. [1]  Please start adding topics to the etherpad. Look forward to seeing you all there! Jay (jungleboyj) [1] https://etherpad.openstack.org/p/cinder-ptg-planning-denver-9-2018 From openstack at nemebean.com Fri Jul 6 17:35:00 2018 From: openstack at nemebean.com (Ben Nemec) Date: Fri, 6 Jul 2018 12:35:00 -0500 Subject: [openstack-dev] [TripleO] easily identifying how services are configured In-Reply-To: References: <8e58a5def5b66d229e9c304cbbc9f25c02d49967.camel@redhat.com> <927f5ff4ec528bdcc5877c7a1a5635c62f5f1cb5.camel@redhat.com> <5c220d66-d4e5-2b19-048c-af3a37c846a3@nemebean.com> Message-ID: (adding the list back) On 07/06/2018 12:05 PM, Dan Prince wrote: > On Fri, Jul 6, 2018 at 12:03 PM Ben Nemec wrote: >> >> >> >> On 07/05/2018 01:23 PM, Dan Prince wrote: >>> On Thu, 2018-07-05 at 14:13 -0400, James Slagle wrote: >>>> >>>> I would almost rather see us organize the directories by service >>>> name/project instead of implementation. >>>> >>>> Instead of: >>>> >>>> puppet/services/nova-api.yaml >>>> puppet/services/nova-conductor.yaml >>>> docker/services/nova-api.yaml >>>> docker/services/nova-conductor.yaml >>>> >>>> We'd have: >>>> >>>> services/nova/nova-api-puppet.yaml >>>> services/nova/nova-conductor-puppet.yaml >>>> services/nova/nova-api-docker.yaml >>>> services/nova/nova-conductor-docker.yaml >>>> >>>> (or perhaps even another level of directories to indicate >>>> puppet/docker/ansible?) >>> >>> I'd be open to this but doing changes on this scale is a much larger >>> developer and user impact than what I was thinking we would be willing >>> to entertain for the issue that caused me to bring this up (i.e. how to >>> identify services which get configured by Ansible). >>> >>> Its also worth noting that many projects keep these sorts of things in >>> different repos too. Like Kolla fully separates kolla-ansible and >>> kolla-kubernetes as they are quite divergent. We have been able to >>> preserve some of our common service architectures but as things move >>> towards kubernetes we may which to change things structurally a bit >>> too. >> >> True, but the current directory layout was from back when we intended to >> support multiple deployment tools in parallel (originally >> tripleo-image-elements and puppet). Since I think it has become clear >> that it's impractical to maintain two different technologies to do >> essentially the same thing I'm not sure there's a need for it now. It's >> also worth noting that kolla-kubernetes basically died because there >> wasn't enough people to maintain both deployment methods, so we're not >> the only ones who have found that to be true. If/when we move to >> kubernetes I would anticipate it going like the initial containers work >> did - development for a couple of cycles, then a switch to the new thing >> and deprecation of the old thing, then removal of support for the old thing. > > Sometimes the old things are a bit longer lived though. And sometimes > the new thing doesn't work out the way you thought they would. Have an > abstraction layer where you can have more than new/old things is > sometimes very useful. I'd had to see us ditch it. 
Especially since > you can already sort of have the both right now by using the resource > registry files to setup a nice default for everything and gradually > switch to new stuff as your defaults. I don't know that you lose that ability in either case though. You can still point your resource registry at the -puppet versions of the services if you want to do that. The only thing that changes is the location of the files. Given that, I don't think there's actually a _huge_ difference between the two options. I prefer the flat directory just because as I've been working on designate it's mildly annoying to have to navigate two separate directory trees to find all the designate-related service files, but I realize that's a fairly minor complaint. :-) > >> >> That being said, because of the fact that the service yamls are >> essentially an API for TripleO because they're referenced in user >> resource registries, I'm not sure it's worth the churn to move >> everything either. I think that's going to be an issue either way >> though, it's just a question of the scope. _Something_ is going to move >> around no matter how we reorganize so it's a problem that needs to be >> addressed anyway. > > I feel like renaming every service template in t-h-t as part of > solving my initial concerns around identifying the 'ansible configured > services' is a bit of a sedge hammer though. I like some of the > renaming ideas proposed here too. I'm just not convinced that renaming > *some* templates is the same as restructuring the entire t-h-t > services hierarchy. I'd rather wait and let it happen more naturally I > guess, perhaps when we need to do something more destructive already. My thought was that either way we're causing people grief because they have to update their files, but the big bang approach would mean they do it once and then it's done. Except I realize now that's not true, because as more things move to ansible the filenames would continue to change. Which makes me wonder if we should be encoding implementation details into the filenames in the first place. Ideally, the interface would be "I want designate-api, so I set OS::TripleO::Services::DesignateApi: services/designate-api.yaml". As a user I probably don't care what technology is used to deploy it, I just want it deployed. Then if/when we change our default method, it just gets swapped out seamlessly and there's no need for me to change my configuration. Obviously we'd still need the ability to have method-specific templates too, but maybe the default designate-api.yaml could be a symlink to whatever we consider the primary one. Not sure if that works with templates in swift though. Anyway, that's some spaghetti I threw at the wall. I don't know if any of it will stick. 
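(For the record, the user-facing interface I mean is just a resource_registry entry in an environment file - roughly like the sketch below, with designate-api purely as the example and paths illustrative only:

    resource_registry:
      # what I'd like users to be able to keep writing, regardless of implementation:
      OS::TripleO::Services::DesignateApi: services/designate-api.yaml
      # versus what they have to keep chasing today as implementations change:
      # OS::TripleO::Services::DesignateApi: puppet/services/designate-api.yaml
      # OS::TripleO::Services::DesignateApi: docker/services/designate-api.yaml

The less of that churn we leak into users' environment files, the better.)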
-Ben From jaypipes at gmail.com Fri Jul 6 20:31:09 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Fri, 6 Jul 2018 16:31:09 -0400 Subject: [openstack-dev] [tc] [all] TC Report 18-26 In-Reply-To: <5e835365-2d1a-d388-66b1-88cdf8c9a0fb@redhat.com> References: <70afeb87-37c9-1595-ffa4-aadbd1a90228@gmail.com> <542aba82-5c0c-3549-e587-2deded610fe9@gmail.com> <5e835365-2d1a-d388-66b1-88cdf8c9a0fb@redhat.com> Message-ID: <64f4e107-ad1d-9657-f916-3d0f45c170a3@gmail.com> On 07/06/2018 12:58 PM, Zane Bitter wrote: > On 02/07/18 19:13, Jay Pipes wrote: >> Nova's primary competition is: >> >> * Stand-alone Ironic >> * oVirt and stand-alone virsh callers >> * Parts of VMWare vCenter [3] >> * MaaS in some respects > > Do you see KubeVirt or Kata or Virtlet or RancherVM ending up on this > list at any point? Because a lot of people* do. > > * https://news.ycombinator.com/item?id=17013779 Please don't lose credibility by saying "a lot of people" see things like RancherVM as competitors to Nova [1] by pointing to a HackerNews [2] thread where two people discuss why RancherVM exists and where one of those people is Darren Shepherd, a co-founder of Rancher, previously at Citrix and GoDaddy with a long-known distaste for all things OpenStack. I don't think that thread is particularly unbiased or helpful. I'll respond to the rest of your (excellent) points a little later... Best, -jay [1] Nova isn't actually mentioned there. "OpenStack" is. [2] I've often wondered who has time to actually respond to anything on HackerNews. Same for when Slashdot was a thing. In fact, now that I think about it, I spend entirely too much time worrying about all of this stuff... ;) From mriedemos at gmail.com Fri Jul 6 20:51:00 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 6 Jul 2018 15:51:00 -0500 Subject: [openstack-dev] [nova] Supporting volume_type when booting from volume In-Reply-To: <5b73329b-b9db-8a2a-229b-a3d4b7224679@oracle.com> References: <07b85c76-2e55-5136-c6e6-b55a239258f1@gmail.com> <5b73329b-b9db-8a2a-229b-a3d4b7224679@oracle.com> Message-ID: <41363482-c5e5-08f6-d84f-7e180c7dbfc7@gmail.com> On 5/23/2017 7:12 PM, Michael Glasgow wrote: > > A slight disadvantage of this approach is that the resulting > incongruence between the client and the API is obfuscating.  When an end > user can make accurate inferences about the API based on how the client > works, that's a form of transparency that can pay dividends. > > Also in terms of the "slippery slope" that has been raised, putting > small bits of orchestration into the client creates a grey area there as > well:  how much is too much? > > OTOH I don't disagree with you.  This approach might be the best of > several not-so-great options, but I wish I could think of a better one. Just an FYI that this same 'pass volume type when booting from volume' request came up again today: https://review.openstack.org/#/c/579520/ We might want to talk about this again at the Stein PTG in September to see if the benefit of adding this for people outweighs the orchestration cost since it seems it's never going to go away and lots of deployments are already patching it in. 
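(For anyone who hasn't hit this: the orchestration we keep pushing onto users and clients is roughly the two-step dance below - the names, size and the 'fast' volume type are all made-up illustrative values:

    # create the boot volume with the desired type first (Cinder)...
    openstack volume create --image cirros --size 10 --type fast boot-vol
    # ...then boot the server from the pre-created volume (Nova)
    openstack server create --volume boot-vol --flavor m1.small my-server

The ask is essentially to collapse that into a single boot-from-volume request.)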
-- Thanks, Matt From mriedemos at gmail.com Fri Jul 6 21:37:01 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 6 Jul 2018 16:37:01 -0500 Subject: [openstack-dev] [Openstack] [nova][api] Novaclient redirect endpoint https into http In-Reply-To: References: <00be01d41381$d37b2940$7a717bc0$@gmail.com> <0D8F95CB-0AAB-45FD-ADC8-3B917C1460D4@workday.com> Message-ID: <703968fb-3a31-cf78-72ea-528b272f05d5@gmail.com> On 7/6/2018 6:28 AM, Kristi Nikolla wrote: > If the answer is 'no', can we find a process that gets us there? Or > are we doomed > by the inability to version the version document? We could always microversion the version document couldn't we? Not saying we want to, but it's an option right? -- Thanks, Matt From mriedemos at gmail.com Fri Jul 6 21:46:44 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 6 Jul 2018 16:46:44 -0500 Subject: [openstack-dev] [nova]API update week 28-4 In-Reply-To: <16464fcd628.1238ed33e3912.7329087506324338162@ghanshyammann.com> References: <16464fcd628.1238ed33e3912.7329087506324338162@ghanshyammann.com> Message-ID: <10c6d50d-4bab-2a65-8714-c2ec1c530b1c@gmail.com> Thanks for sending out this update, it's useful for me when I'm not attending the office hours. On 7/4/2018 6:10 AM, Ghanshyam Mann wrote: > 1. Servers Ips non-unique network names : > -https://blueprints.launchpad.net/nova/+spec/servers-ips-non-unique-network-names > - Spec Update need another +2 -https://review.openstack.org/#/c/558125/ > -https://review.openstack.org/#/q/topic:bp/servers-ips-non-unique-network-names+(status:open+OR+status:merged) > - Weekly Progress: On Hold. Waiting for spec update to merge first. I've poked dansmith to approve this. However, the author (or someone) should start working on the code changes since it's a pretty straight-forward change and we don't have much time left in the cycle for this. > > 2. Abort live migration in queued state: > -https://blueprints.launchpad.net/nova/+spec/abort-live-migration-in-queued-status > -https://review.openstack.org/#/q/topic:bp/abort-live-migration-in-queued-status+(status:open+OR+status:merged) > - Weekly Progress: Code is up for review. No Review last week. > This is in the runways queue so it should be coming up in the next slot. > 3. Complex anti-affinity policies: > -https://blueprints.launchpad.net/nova/+spec/complex-anti-affinity-policies > -https://review.openstack.org/#/q/topic:bp/complex-anti-affinity-policies+(status:open+OR+status:merged) > - Weekly Progress: Code is up for review. Few reviews done . This is currently in a runways slot and has had quite a bit of review this week. It's not going to be done this week but hopefully we can get it all merged next week (there aren't any major issues that I foresee at this point after having gone through the full series yesterday). > > 4. Volume multiattach enhancements: > -https://blueprints.launchpad.net/nova/+spec/volume-multiattach-enhancements > -https://review.openstack.org/#/q/topic:bp/volume-multiattach-enhancements+(status:open+OR+status:merged) > - Weekly Progress: Waiting to hear from mriedem about his WIP on base patch -https://review.openstack.org/#/c/569649/3 Yeah this is on my TODO list. I was waiting for the spec to merge before starting in on the API changes, and have just been busy with other stuff. I'll try to get the API changes done next week. 
-- Thanks, Matt From mriedemos at gmail.com Fri Jul 6 21:51:02 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 6 Jul 2018 16:51:02 -0500 Subject: [openstack-dev] [nova]API update week 28-4 In-Reply-To: <16464fcd628.1238ed33e3912.7329087506324338162@ghanshyammann.com> References: <16464fcd628.1238ed33e3912.7329087506324338162@ghanshyammann.com> Message-ID: <48b26f1e-e32d-71e2-6e3b-d2b47a1cadf6@gmail.com> On 7/4/2018 6:10 AM, Ghanshyam Mann wrote: > Planned Features : > ============== > Below are the API related features for Rocky cycle. Nova API Sub team will start reviewing those to give their regular feedback. If anythings missing there feel free to add those in etherpad-https://etherpad.openstack.org/p/rocky-nova-priorities-tracking Oh yeah, getting agreement on the direction of the "handling a down cell" spec is going to be important and we don't have much time to get this done now either (~3 weeks). https://review.openstack.org/#/c/557369/ -- Thanks, Matt From kennelson11 at gmail.com Fri Jul 6 22:37:39 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Fri, 6 Jul 2018 15:37:39 -0700 Subject: [openstack-dev] [First Contact] [SIG] [PTL] Project Liaisons In-Reply-To: References: Message-ID: Hello again! I updated the Project Liaisons list [1] with PTL's I didn't hear from about delegating the duties to a different person. If you want to delegate this or add other people willing to be contacted by new contributors, please let me know and I would be happy to update the list :) It would also be nice to fill in timezones for new contributors looking over the list so they know when might be the best time to contact you. Thanks! -Kendall Nelson (diablo_rojo) [1] https://wiki.openstack.org/wiki/First_Contact_SIG#Project_Liaisons On Wed, Jun 6, 2018 at 3:00 PM Kendall Nelson wrote: > Hello! > > As you hopefully are aware the First Contact SIG strives to provide a > place for new contributors to come for information and advice. Part of this > is helping new contributors find more established contributors in the > community they can ask for help from. While the group of people involved in > the FC SIG is diverse in project knowledge, we don't have all of them > covered. > > Over the last year we have built a list of Project Liaisons to refer new > contributors to when the project they are interested in isn't one we know > well. Unfortunately, this list[1] isn't as filled out as we would like it > to be. > > So! Keeping with the conventions of other liaison roles, if there isn't > already a project liaison named, this role will default to the PTL unless > you respond to this thread with the individual you are delegating to :) Or > by adding them to the list in the wiki[1]. > > Essentially the duties of the liaison are just to be willing to help out > newcomers when a FC SIG member introduces you to them and to keep an eye > out for patches that come in to your project with the 'Welcome, new > contributor' bot message. Its likely you are doing this already, but to > have a defined list of people to refer to would be a huge help. > > Thank you! > > -Kendall Nelson (diablo_rojo) > > [1]https://wiki.openstack.org/wiki/First_Contact_SIG#Project_Liaisons > -------------- next part -------------- An HTML attachment was scrubbed... URL: From anlin.kong at gmail.com Sat Jul 7 00:54:20 2018 From: anlin.kong at gmail.com (Lingxian Kong) Date: Sat, 7 Jul 2018 12:54:20 +1200 Subject: [openstack-dev] [barbican] Can we support key wrapping mechanisms other than CKM_AES_CBC_PAD? 
Message-ID: Hi Barbican guys, Currently, I am testing the integration between Barbican and SoftHSM v2 but I met with a problem that SoftHSM v2 doesn't support CKM_AES_CBC_PAD key wrapping operation which is hardcoded in Barbican code here https://github.com/openstack/barbican/blob/5dea5cec130b59ecfb8d46435cd7eb3212894b4c/barbican/plugin/crypto/pkcs11.py#L496. After discussion with SoftHSM team, I was told SoftHSM does support other mechanisms such as CKM_AES_KEY_WRAP, CKM_AES_KEY_WRAP_PAD, CKM_RSA_PKCS, or CKM_RSA_PKCS_OAEP. My question is, is it easy to support other wrapping mechanisms in Barbican? Or if there is another workaround this problem? Cheers, Lingxian Kong -------------- next part -------------- An HTML attachment was scrubbed... URL: From zbitter at redhat.com Sat Jul 7 01:23:08 2018 From: zbitter at redhat.com (Zane Bitter) Date: Fri, 6 Jul 2018 21:23:08 -0400 Subject: [openstack-dev] [tc] [all] TC Report 18-26 In-Reply-To: <480ac9d2-65c1-39ff-ec0d-bceade3e1def@gmail.com> References: <70afeb87-37c9-1595-ffa4-aadbd1a90228@gmail.com> <542aba82-5c0c-3549-e587-2deded610fe9@gmail.com> <1A3C52DFCD06494D8528644858247BF01C143256@EX10MBOX03.pnnl.gov> <480ac9d2-65c1-39ff-ec0d-bceade3e1def@gmail.com> Message-ID: <51332b6c-bdf5-bfce-d2e4-155703b6c0ad@redhat.com> I'm not Kevin but I think I can clarify some of these. On 03/07/18 16:04, Jay Pipes wrote: > On 07/03/2018 02:37 PM, Fox, Kevin M wrote: > So these days containers are out clouding vms at this use case. So, does Nova continue to be cloudy vm or does it go for the more production vm use case like oVirt and VMware? > > "production VM" use case like oVirt or VMWare? I don't know what that means. You mean "a GUI-based VM management system"? Read 'pets'. >> While some people only ever consider running Kubernetes on top of a >> cloud, some of us realize maintaining both a cloud an a kubernetes is >> unnecessary and can greatly simplify things simply by running k8s on >> bare metal. This does then make it a competitor to Nova  as a platform >> for running workload on. > > What percentage of Kubernetes users deploy on baremetal (and continue to > deploy on baremetal in production as opposed to just toying around with > it)? At Red Hat Summit there was a demo of deploying OpenShift alongside (not on top of) OpenStack on bare metal using Director (downstream of TripleO - so managed by Ironic in an OpenStack undercloud). I don't know if people using Kubernetes directly on baremetal in production is widespread right now, but it's clear to me that it will be just around the corner. >> As k8s gains more multitenancy features, this trend will continue to >> grow I think. OpenStack needs to be ready for when that becomes a thing. > > OpenStack is already multi-tenant, being designed as such from day one. > With the exception of Ironic, which uses Nova to enable multi-tenancy. > > What specifically are you referring to "OpenStack needs to be ready"? > Also, what specific parts of OpenStack are you referring to there? I believe the point was: * OpenStack supports multitenancy. * Kubernetes does not support multitenancy. * Applications that require multitenancy currently require separate per-tenant deployments of Kubernetes; deploying on top of a cloud (such as OpenStack) makes this easier, so there is demand for OpenStack from people who need multitenancy even if they are mainly interacting with Kubernetes. Effectively OpenStack is the multitenancy layer for k8s in a lot of deployments. * One day Kubernetes will support multitenancy. * Then what? 
>> Think of OpenStack like a game console. The moment you make a component optional and make it takes extra effort to obtain, few software developers target it and rarely does anyone one buy the addons it because there isn't software for it. Right now, just about everything in OpenStack is an addon. Thats a problem. > > I don't have any game consoles nor do I develop software for them, Me neither, but much like OpenStack it's a two-sided marketplace (developers and users in the console case, operators and end-users in the OpenStack case), where you succeed or fail based on how much value you can get flowing between the two sides. There's a positive feedback loop between supply on one side and demand on the other, so like all positive feedback loops it's unstable and you have to find some way to bootstrap it in the right direction, which is hard. One way to make it much, much harder is to segment your market such a way that you give yourself a second feedback loop that you also have to bootstrap, that depends on the first one, and you only get to use a subset of your existing market participants to do it. As an example from my other reply, we're probably going to try to use Barbican to help integrate Heat with external tools like k8s and Ansible, but for that to have any impact we'll have to convince users that they want to do this badly enough that they'll convince their operators to deploy Barbican - and we'll likely have to do so before they've even tried it. That's even after we've already convinced them to use OpenStack and deploy Heat. If Barbican (and Heat) were available as part of every OpenStack deployment, then it'd just be a matter of convincing people to use the feature, which would already be available and which they could try out at any time. That's a much lower bar. I'm not defending "make it a monolith" as a solution, but Kevin is identifying a real problem. - ZB From colleen at gazlene.net Sat Jul 7 07:00:31 2018 From: colleen at gazlene.net (Colleen Murphy) Date: Sat, 07 Jul 2018 09:00:31 +0200 Subject: [openstack-dev] [keystone] Keystone Team Update - Week of 2 July 2018 Message-ID: <1530946831.2624147.1432816736.706AB4E3@webmail.messagingengine.com> # Keystone Team Update - Week of 2 July 2018 ## News Fairly quiet week due to the holiday in the US. During the weekly meeting[1] we did some brainstorming about how to address the mutable config community goal[2], and extending oslo.policy types in order to be able to make finer-grained rules. [1] http://eavesdrop.openstack.org/meetings/keystone/2018/keystone.2018-07-03-16.00.log.html [2] https://governance.openstack.org/tc/goals/rocky/enable-mutable-configuration.html ## Recently Merged Changes Search query: https://bit.ly/2IACk3F We merged 11 changes this week. ## Changes that need Attention Search query: https://bit.ly/2wv7QLK There are 71 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots. Many of these are for the flask migration and for the hierarchical limits work, so please have a look. ## Bugs This week we opened 5 new bugs and also closed 5. 
Bugs opened (5) Bug #1779889 (keystone:Medium) opened by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1779889 Bug #1779903 (keystone:Undecided) opened by wangxiyuan https://bugs.launchpad.net/keystone/+bug/1779903 Bug #1780159 (keystone:Undecided) opened by mchlumsky https://bugs.launchpad.net/keystone/+bug/1780159 Bug #1780377 (keystone:Undecided) opened by Kristi Nikolla https://bugs.launchpad.net/keystone/+bug/1780377 Bug #1780503 (keystone:Undecided) opened by Gage Hugo https://bugs.launchpad.net/keystone/+bug/1780503 Bugs closed (2) Bug #1643301 (keystone:Wishlist) https://bugs.launchpad.net/keystone/+bug/1643301 Bug #1780159 (keystone:Undecided) https://bugs.launchpad.net/keystone/+bug/1780159 Bugs fixed (3) Bug #1711883 (keystone:Medium) fixed by Vishakha Agarwal https://bugs.launchpad.net/keystone/+bug/1711883 Bug #1777892 (keystone:Medium) fixed by wangxiyuan https://bugs.launchpad.net/keystone/+bug/1777892 Bug #1777893 (keystone:Medium) fixed by wangxiyuan https://bugs.launchpad.net/keystone/+bug/1777893 ## Milestone Outlook https://releases.openstack.org/rocky/schedule.html Our feature freeze is scheduled for next week. There is still a lot of ongoing work that needs to be completed and reviewed. ## Help with this newsletter Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter From michel at redhat.com Sun Jul 8 07:56:02 2018 From: michel at redhat.com (Michel Peterson) Date: Sun, 8 Jul 2018 10:56:02 +0300 Subject: [openstack-dev] [infra] Greasemonkey script to see CI progress on OpenStack's Gerrit Message-ID: Hello everyone, I've created a greasemonkey script that will show you the status of the current CI run on the OpenStack Gerrit in real time. I put it together really quickly so there is room for improvement. You can find it here: https://gist.github.com/mpeterson/bb351543c4abcca8e7bb1205fcea4c75 I am wondering if this would be an interesting thing for you guys to add to Gerrit directly? I think it's very useful to have. Should I propose a patch to include it in review.o.o ? Best, M From fungi at yuggoth.org Sun Jul 8 14:00:55 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sun, 8 Jul 2018 14:00:55 +0000 Subject: [openstack-dev] [infra] Greasemonkey script to see CI progress on OpenStack's Gerrit In-Reply-To: References: Message-ID: <20180708140055.jmqofqslenu3bpfa@yuggoth.org> On 2018-07-08 10:56:02 +0300 (+0300), Michel Peterson wrote: > I've created a greasemonkey script that will show you the status of > the current CI run on the OpenStack Gerrit in real time. [...] We've had something similar implemented in the hideci.js overlay for just over three years, since https://review.openstack.org/179361 merged. It's disabled by default but can be toggled with the zuul_inline bool. Chances are it will need some tweaking to support Zuul v3's API (I think work on that was waiting for reintroduction of the per-change status query method, but looks like it's finally landed). The only real reason we didn't leave it enabled years ago was that hundreds of developers with dozens of browser tabs open to different Gerrit changes all slamming the status API caused a really effective denial of service against Zuul. With the API rearchitecture in v3 that hopefully won't be an issue, but we have to go in assuming that it still might. > Should I propose a patch to include it in review.o.o ? 
I recommend trying out the one we have already, fixing it if needed, and then coordinating a time to merge a change switching that zuul_inline toggle to true. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From soulxu at gmail.com Mon Jul 9 01:35:46 2018 From: soulxu at gmail.com (Alex Xu) Date: Mon, 9 Jul 2018 09:35:46 +0800 Subject: [openstack-dev] [Openstack] [nova][api] Novaclient redirect endpoint https into http In-Reply-To: <703968fb-3a31-cf78-72ea-528b272f05d5@gmail.com> References: <00be01d41381$d37b2940$7a717bc0$@gmail.com> <0D8F95CB-0AAB-45FD-ADC8-3B917C1460D4@workday.com> <703968fb-3a31-cf78-72ea-528b272f05d5@gmail.com> Message-ID: The version API isn't protected by the microversion, since the version API is used to discover the microversion. 2018-07-07 5:37 GMT+08:00 Matt Riedemann : > On 7/6/2018 6:28 AM, Kristi Nikolla wrote: > >> If the answer is 'no', can we find a process that gets us there? Or >> are we doomed >> by the inability to version the version document? >> > > We could always microversion the version document couldn't we? Not saying > we want to, but it's an option right? > > -- > > Thanks, > > Matt > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From skramaja at redhat.com Mon Jul 9 04:57:30 2018 From: skramaja at redhat.com (Saravanan KR) Date: Mon, 9 Jul 2018 10:27:30 +0530 Subject: [openstack-dev] [tripleo] What is the proper way to use NetConfigDataLookup? In-Reply-To: References: Message-ID: Are you using the first-boot script [1] mapped to NodeUserData? If yes, you could check the logs/error of the first-boot script @/var/log/cloud-init-output.log on the overcloud nodes. Regards, Saravanan KR [1] https://github.com/openstack/tripleo-heat-templates/blob/e64c10b9c13188f37e6f122475fe02280eaa6686/firstboot/os-net-config-mappings.yaml On Fri, Jul 6, 2018 at 9:53 PM Mark Hamzy wrote: > > What is the proper way to use NetConfigDataLookup? I tried the following: > > (undercloud) [stack at oscloud5 ~]$ cat << '__EOF__' > ~/templates/mapping-info.yaml > parameter_defaults: > NetConfigDataLookup: > control1: > nic1: '5c:f3:fc:36:dd:68' > nic2: '5c:f3:fc:36:dd:6c' > nic3: '6c:ae:8b:29:27:fa' # 9.114.219.34 > nic4: '6c:ae:8b:29:27:fb' # 9.114.118.??? > nic5: '6c:ae:8b:29:27:fc' > nic6: '6c:ae:8b:29:27:fd' > compute1: > nic1: '6c:ae:8b:25:34:ea' # 9.114.219.44 > nic2: '6c:ae:8b:25:34:eb' > nic3: '6c:ae:8b:25:34:ec' # 9.114.118.??? > nic4: '6c:ae:8b:25:34:ed' > compute2: > nic1: '00:0a:f7:73:3c:c0' > nic2: '00:0a:f7:73:3c:c1' > nic3: '00:0a:f7:73:3c:c2' # 9.114.118.156 > nic4: '00:0a:f7:73:3c:c3' # 9.114.112.??? 
> nic5: '00:0a:f7:73:73:f4' > nic6: '00:0a:f7:73:73:f5' > nic7: '00:0a:f7:73:73:f6' # 9.114.219.134 > nic8: '00:0a:f7:73:73:f7' > __EOF__ > (undercloud) [stack at oscloud5 ~]$ openstack overcloud deploy --templates -e ~/templates/node-info.yaml -e ~/templates/mapping-info.yaml -e ~/templates/overcloud_images.yaml -e ~/templates/environments/network-environment.yaml -e ~/templates/environments/network-isolation.yaml -e ~/templates/environments/config-debug.yaml --disable-validations --ntp-server pool.ntp.org --control-scale 1 --compute-scale > > But I did not see a /etc/os-net-config/mapping.yaml get created. > > Also is this configuration used when the system boots IronicPythonAgent to provision the disk? > > -- > Mark > > You must be the change you wish to see in the world. -- Mahatma Gandhi > Never let the future disturb you. You will meet it, if you have to, with the same weapons of reason which today arm you against the present. -- Marcus Aurelius > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From n_jayshankar at yahoo.com Mon Jul 9 06:44:01 2018 From: n_jayshankar at yahoo.com (jayshankar nair) Date: Mon, 9 Jul 2018 06:44:01 +0000 (UTC) Subject: [openstack-dev] swift containers. References: <842093415.1132418.1531118641150.ref@mail.yahoo.com> Message-ID: <842093415.1132418.1531118641150@mail.yahoo.com> Hi, I am unable to create containers in object store. "Unable to get the Swift service info"."Unable to get the swift container listing". My horizon is running on 192.168.0.19. My swift is running on 192.168.0.12(how can i change it).  I am trying to list the container with python sdk. Is this the right api. from openstack import connectionconn = connection.Connection(auth_url="http://192.168.0.19:5000/v3",                      project_name="admin",username="admin",                      password="6908a8d218f843dd",                      user_domain_id="default",                      project_domain_id="default",                      identity_api_version=3) for container in conn,object_store.containers():print(container.name). I need documentation of python sdk Thanks,Jayshankar -------------- next part -------------- An HTML attachment was scrubbed... URL: From n_jayshankar at yahoo.com Mon Jul 9 06:46:37 2018 From: n_jayshankar at yahoo.com (jayshankar nair) Date: Mon, 9 Jul 2018 06:46:37 +0000 (UTC) Subject: [openstack-dev] swift containers. In-Reply-To: <842093415.1132418.1531118641150@mail.yahoo.com> References: <842093415.1132418.1531118641150.ref@mail.yahoo.com> <842093415.1132418.1531118641150@mail.yahoo.com> Message-ID: <1933463637.1140026.1531118797717@mail.yahoo.com> Hi, I am unable to create containers in object store. "Unable to get the Swift service info"."Unable to get the swift container listing". My horizon is running on 192.168.0.19. My swift is running on 192.168.0.12(how can i change it).  I am trying to list the container with python sdk. Is this the right api. 
from openstack import connection

conn = connection.Connection(auth_url="http://192.168.0.19:5000/v3",
                             project_name="admin", username="admin",
                             password="6908a8d218f843dd",
                             user_domain_id="default",
                             project_domain_id="default",
                             identity_api_version=3)

for container in conn.object_store.containers():
    print(container.name)

I need documentation of the python sdk.

Thanks,
Jayshankar
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From xinni.ge1990 at gmail.com Mon Jul 9 07:38:47 2018
From: xinni.ge1990 at gmail.com (Xinni Ge)
Date: Mon, 9 Jul 2018 16:38:47 +0900
Subject: [openstack-dev] [openstack-infra][releases] couldn't find xstatic package in pypi after release job is merged
Message-ID:

Hello openstack-infra team,

I uploaded a patch to add a new release of xstatic-angular-material, and thanks for your work it was merged several days ago. Here is the link of the patch.
https://review.openstack.org/#/c/577018/

However, I cannot find the correct version in pypi index. It still shows an initial version as 0.0.0.
https://pypi.org/project/xstatic-angular-material/

There was a similar problem before but it seems to be fixed already after the following patches were merged.
https://review.openstack.org/#/c/559300/
https://review.openstack.org/#/c/559373/

Could you help me with this issue please? Thank you very much.

Best regards,
Xinni Ge
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From tony at bakeyournoodle.com Mon Jul 9 08:09:39 2018
From: tony at bakeyournoodle.com (Tony Breeds)
Date: Mon, 9 Jul 2018 18:09:39 +1000
Subject: [openstack-dev] [openstack-infra][releases] couldn't find xstatic package in pypi after release job is merged
In-Reply-To:
References:
Message-ID: <20180709080938.GB11943@thor.bakeyournoodle.com>

On Mon, Jul 09, 2018 at 04:38:47PM +0900, Xinni Ge wrote:
> Hello openstack-infra team,
>
> I uploaded a patch to add a new release of xstatic-angular-material, and
> thanks for your work it was merged several days ago.
> Here is the link of the patch.
> https://review.openstack.org/#/c/577018/
>
> However, I cannot find the correct version in pypi index. It still shows an
> initial version as 0.0.0.
> https://pypi.org/project/xstatic-angular-material/
>
> There was a similar problem before but it seems to be fixed already after
> the following patches were merged.
> https://review.openstack.org/#/c/559300/
> https://review.openstack.org/#/c/559373/
>
> Could you help me with this issue please? Thank you very much.

There was a thread about this starting here:
http://lists.openstack.org/pipermail/openstack-dev/2018-June/131773.html

The tools side of things has been fixed, so if you correct the version in your package and ask for a new release things should work.

Yours Tony.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: not available
URL: 

From balazs.gibizer at ericsson.com Mon Jul 9 10:38:07 2018
From: balazs.gibizer at ericsson.com (Balázs Gibizer)
Date: Mon, 09 Jul 2018 12:38:07 +0200
Subject: [openstack-dev] [nova]Notification update week 28
Message-ID: <1531132687.12223.1@smtp.office365.com>

Hi,

Here is the latest notification subteam update.
Bugs
----
[Medium] Server operations fail to complete with versioned notifications if payload contains unset non-nullable fields
https://bugs.launchpad.net/nova/+bug/1739325
Matt figured out the reason for the failure. Fix merged on master, backports are in progress.

[Low] Notification sending sometimes hits the keystone API to get glance endpoints
https://bugs.launchpad.net/nova/+bug/1757407
Finally I have updated the fix and Eric is now +2 so we only need a second core for https://review.openstack.org/#/c/564528/

Features
--------
Add the user id and project id of the user initiated the instance action to the notification
--------------------------------------------------------------------------------------------
https://blueprints.launchpad.net/nova/+spec/add-action-initiator-to-instance-action-notifications
The implementation has been merged and the bp is closed. \o/

Versioned notification transformation
-------------------------------------
There are a couple of patches updated recently so there are things to review:
https://review.openstack.org/#/q/status:open+topic:bp/versioned-notification-transformation-rocky

Weekly meeting
--------------
The next meeting is planned to be held on the 10th of July on #openstack-meeting-4
https://www.timeanddate.com/worldclock/fixedtime.html?iso=20180710T170000

Cheers,
gibi
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bdobreli at redhat.com Mon Jul 9 12:28:57 2018
From: bdobreli at redhat.com (Bogdan Dobrelya)
Date: Mon, 9 Jul 2018 15:28:57 +0300
Subject: Re: [openstack-dev] [TripleO] easily identifying how services are configured
In-Reply-To: <5c220d66-d4e5-2b19-048c-af3a37c846a3@nemebean.com>
References: <8e58a5def5b66d229e9c304cbbc9f25c02d49967.camel@redhat.com> <927f5ff4ec528bdcc5877c7a1a5635c62f5f1cb5.camel@redhat.com> <5c220d66-d4e5-2b19-048c-af3a37c846a3@nemebean.com>
Message-ID: <88d7f66c-4215-b032-0b98-2671f14dab21@redhat.com>

On 7/6/18 7:02 PM, Ben Nemec wrote:
>
>
> On 07/05/2018 01:23 PM, Dan Prince wrote:
>> On Thu, 2018-07-05 at 14:13 -0400, James Slagle wrote:
>>>
>>> I would almost rather see us organize the directories by service
>>> name/project instead of implementation.
>>>
>>> Instead of:
>>>
>>> puppet/services/nova-api.yaml
>>> puppet/services/nova-conductor.yaml
>>> docker/services/nova-api.yaml
>>> docker/services/nova-conductor.yaml
>>>
>>> We'd have:
>>>
>>> services/nova/nova-api-puppet.yaml
>>> services/nova/nova-conductor-puppet.yaml
>>> services/nova/nova-api-docker.yaml
>>> services/nova/nova-conductor-docker.yaml
>>>
>>> (or perhaps even another level of directories to indicate
>>> puppet/docker/ansible?)
>>
>> I'd be open to this but doing changes on this scale is a much larger
>> developer and user impact than what I was thinking we would be willing
>> to entertain for the issue that caused me to bring this up (i.e. how to
>> identify services which get configured by Ansible).
>>
>> Its also worth noting that many projects keep these sorts of things in
>> different repos too. Like Kolla fully separates kolla-ansible and
>> kolla-kubernetes as they are quite divergent. We have been able to
>> preserve some of our common service architectures but as things move
>> towards kubernetes we may which to change things structurally a bit
>> too.
>
> True, but the current directory layout was from back when we intended to
> support multiple deployment tools in parallel (originally
> tripleo-image-elements and puppet).
> Since I think it has become clear that it's impractical to maintain two
> different technologies to do essentially the same thing I'm not sure
> there's a need for it now. It's also worth noting that kolla-kubernetes
> basically died because there wasn't enough people to maintain both
> deployment methods, so we're not the only ones who have found that to be
> true. If/when we move to kubernetes I would anticipate it going like the
> initial containers work did - development for a couple of cycles, then a
> switch to the new thing and deprecation of the old thing, then removal
> of support for the old thing.
>
> That being said, because of the fact that the service yamls are
> essentially an API for TripleO because they're referenced in user

this ^^

> resource registries, I'm not sure it's worth the churn to move
> everything either. I think that's going to be an issue either way
> though, it's just a question of the scope. _Something_ is going to move
> around no matter how we reorganize so it's a problem that needs to be
> addressed anyway.

[tl;dr] I can foresee reorganizing that API becoming a nightmare for maintainers doing backports for queens (and the LTS downstream release based on it). Now imagine kubernetes support coming within the next few years, before we can let the old API just go...

I have an example [0] of all the pain brought by a simple move of 'API defaults' from environments/services-docker to environments/services plus environments/services-baremetal. Each time a file changed contents at its old location, like here [1], I had to run a lot of sanity checks to rebase it properly: checking that the updated paths in resource registries are still valid or have been moved as well, then picking the source of truth for the diverged old vs. changed locations - all that to lose nothing important in the process.

So I'd say please let's *not* change services' paths/namespaces in the t-h-t "API" without a real need to do so, i.e. not until there are no alternatives left.

[0] https://review.openstack.org/#/q/topic:containers-default-stable/queens
[1] https://review.openstack.org/#/c/567810

>
> -Ben
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Best regards,
Bogdan Dobrelya,
Irc #bogdando

From alex at privacysystems.eu Mon Jul 9 13:57:07 2018
From: alex at privacysystems.eu (Alexandru Sorodoc)
Date: Mon, 9 Jul 2018 16:57:07 +0300
Subject: [openstack-dev] [puppet] puppet-senlin development
Message-ID: <2fd18fde-f7bf-e84b-27f6-b697f58e3f6b@privacysystems.eu>

Hello,

Is anyone working or planning to work on the puppet-senlin module? We want to use Senlin in our Pike deployment and we are considering contributing to its puppet module to bring it to a working state.

Best regards,
Alex
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mordred at inaugust.com Mon Jul 9 14:09:15 2018
From: mordred at inaugust.com (Monty Taylor)
Date: Mon, 9 Jul 2018 10:09:15 -0400
Subject: [openstack-dev] swift containers.
In-Reply-To: <1933463637.1140026.1531118797717@mail.yahoo.com> References: <842093415.1132418.1531118641150.ref@mail.yahoo.com> <842093415.1132418.1531118641150@mail.yahoo.com> <1933463637.1140026.1531118797717@mail.yahoo.com> Message-ID: <41ab5d70-deaf-39a7-56e0-9a4ef4ad20c1@inaugust.com> On 07/09/2018 02:46 AM, jayshankar nair wrote: > > > > > Hi, > > I am unable to create containers in object store. > > "Unable to get the Swift service info". > "Unable to get the swift container listing". > > My horizon is running on 192.168.0.19. My swift is running on > 192.168.0.12(how can i change it). > > I am trying to list the container with python sdk. Is this the right api. > > from openstack import connection > conn = connection.Connection(auth_url="http://192.168.0.19:5000/v3", >                       project_name="admin",username="admin", >                       password="6908a8d218f843dd", >                       user_domain_id="default", >                       project_domain_id="default", >                       identity_api_version=3) That looks fine (although you don't need identity_api_version=3 - you are specifying domain_ids - it'll figure things out) Can you add: import openstack openstack.enable_logging(http_debug=True) before your current code and paste the output? > for container in conn,object_store.containers(): > print(container.name). > > I need documentation of python sdk https://docs.openstack.org/openstacksdk/latest/ > Thanks, > Jayshankar > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From zbitter at redhat.com Mon Jul 9 15:04:28 2018 From: zbitter at redhat.com (Zane Bitter) Date: Mon, 9 Jul 2018 11:04:28 -0400 Subject: [openstack-dev] [python3][tc][infra][docs] changing the documentation build PTI to use tox In-Reply-To: <1530823071-sup-2420@lrrr.local> References: <1530823071-sup-2420@lrrr.local> Message-ID: <56346403-8edd-8cd0-f5c1-800bb87584e3@redhat.com> On 05/07/18 16:46, Doug Hellmann wrote: > I have a governance patch up [1] to change the project-testing-interface > (PTI) for building documentation to restore the use of tox. > > We originally changed away from tox because we wanted to have a > single standard command that anyone could use to build the documentation > for a project. It turns out that is more complicated than just > running sphinx-build in a lot of cases anyway, because of course > you have a bunch of dependencies to install before sphinx-build > will work. Is this the main reason? If we think we made the wrong call (i.e. everyone has to set up a virtualenv and install doc/requirements.txt anyway so we should just make them use tox even if they are not Python projects), then I agree it makes sense to fix it even though we only _just_ finished telling people it would be the opposite way. > Updating the job that uses sphinx directly to run under python 3, > while allowing the transition to be self-testing, was going to > require writing some extra complexity to look at something in the > repository to decide what version of python to use. Since tox > handles that for us by letting us set basepython in the virtualenv > configuration, it seemed more straightforward to go back to using > tox. Wouldn't another option be to have separate Zuul jobs for Python 3 and Python 2-based sphinx builds? 
Then the switchover would still be self-testing. I'd rather do that if this is the main problem we're trying to solve, rather than reverse course. > So, this new PTI definition restores the use of tox and specifies > a "docs" environment. I have started defining the relevant jobs [2] > and project templates [3], and I will be updating the python3-first > transition plan as well. > > Let me know if you have any questions about any of that, > Doug > > [1] https://review.openstack.org/#/c/580495/ > [2] https://review.openstack.org/#/q/project:openstack-infra/project-config+topic:python3-first > [3] https://review.openstack.org/#/q/project:openstack-infra/openstack-zuul-jobs+topic:python3-first > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From dougal at redhat.com Mon Jul 9 15:13:09 2018 From: dougal at redhat.com (Dougal Matthews) Date: Mon, 9 Jul 2018 16:13:09 +0100 Subject: [openstack-dev] [mistral] Clearing out old gerrit reviews Message-ID: Hey folks, I'd like to propose that we start abandoning old Gerrit reviews. This report shows how stale and out of date some of the reviews are: http://stackalytics.com/report/reviews/mistral-group/open I would like to initially abandon anything without any activity for a year, but we might want to consider a shorter limit - maybe 6 months. Reviews can be restored, so the risk is low. What do you think? Any objections or counter suggestions? If I don't hear any complaints, I'll go ahead with this next week (or maybe the following week). Cheers, Dougal -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Mon Jul 9 15:42:30 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 09 Jul 2018 11:42:30 -0400 Subject: [openstack-dev] [python3][tc][infra][docs] changing the documentation build PTI to use tox In-Reply-To: <56346403-8edd-8cd0-f5c1-800bb87584e3@redhat.com> References: <1530823071-sup-2420@lrrr.local> <56346403-8edd-8cd0-f5c1-800bb87584e3@redhat.com> Message-ID: <1531150517-sup-2440@lrrr.local> Excerpts from Zane Bitter's message of 2018-07-09 11:04:28 -0400: > On 05/07/18 16:46, Doug Hellmann wrote: > > I have a governance patch up [1] to change the project-testing-interface > > (PTI) for building documentation to restore the use of tox. > > > > We originally changed away from tox because we wanted to have a > > single standard command that anyone could use to build the documentation > > for a project. It turns out that is more complicated than just > > running sphinx-build in a lot of cases anyway, because of course > > you have a bunch of dependencies to install before sphinx-build > > will work. > > Is this the main reason? If we think we made the wrong call (i.e. > everyone has to set up a virtualenv and install doc/requirements.txt > anyway so we should just make them use tox even if they are not Python > projects), then I agree it makes sense to fix it even though we only > _just_ finished telling people it would be the opposite way. Yes, we made the wrong call when we set the PTI to not use tox for these cases. 
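To make the shape of that interface concrete, here is a minimal sketch of the kind of per-project "docs" environment being discussed. The file layout, requirements path and sphinx-build arguments below are illustrative assumptions following common OpenStack conventions, not something mandated by the PTI change itself:

    [testenv:docs]
    basepython = python3
    deps = -r{toxinidir}/doc/requirements.txt
    commands = sphinx-build -W -b html doc/source doc/build/html

With an entry like that living in the repository, the same `tox -e docs` command works for every project regardless of which interpreter its documentation needs, which is the property being discussed in this thread.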
> > Updating the job that uses sphinx directly to run under python 3, > > while allowing the transition to be self-testing, was going to > > require writing some extra complexity to look at something in the > > repository to decide what version of python to use. Since tox > > handles that for us by letting us set basepython in the virtualenv > > configuration, it seemed more straightforward to go back to using > > tox. > > Wouldn't another option be to have separate Zuul jobs for Python 3 and > Python 2-based sphinx builds? Then the switchover would still be > self-testing. > > I'd rather do that if this is the main problem we're trying to solve, > rather than reverse course. These jobs run on tag events, which are not "branch aware" (tags can be on 0 or more branches at the same time). That means we cannot have different versions of the job running for different branches. Instead we need 1 job, which uses data inside the repository to decide exactly what to do. Instead of writing a new, more complicated, job to look at a flag file or other settings to decide whether to run sphinx under python 2 or 3, it will be simpler to go back to using the old existing tox-based job and to use the tox configuration to control the version of python. Using the tox job also has the benefit of fixing the tox-siblings issue for projects like neutron plugins that need neutron installed in order to generate their documentation. So we fix 2 problems with 1 change. We actually have a similar problem for the release job, but in that case we don't need tox because we don't need to install any dependencies in order to build the artifacts. I have tested building sdists and wheels from every repo with a setup.py and did not find any failures related to using python 3, so we can just switch everyone over to use the new job. > > > So, this new PTI definition restores the use of tox and specifies > > a "docs" environment. I have started defining the relevant jobs [2] > > and project templates [3], and I will be updating the python3-first > > transition plan as well. > > > > Let me know if you have any questions about any of that, > > Doug > > > > [1] https://review.openstack.org/#/c/580495/ > > [2] https://review.openstack.org/#/q/project:openstack-infra/project-config+topic:python3-first > > [3] https://review.openstack.org/#/q/project:openstack-infra/openstack-zuul-jobs+topic:python3-first > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > From doug at doughellmann.com Mon Jul 9 16:03:01 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 09 Jul 2018 12:03:01 -0400 Subject: [openstack-dev] Fwd: [TIP] tox release 3.1.1 Message-ID: <1531152060-sup-5499@lrrr.local> Heads-up, there is a new tox release out. 3.1 includes some behavior changes in the way basepython behaves (thanks, Stephen Finucan!), as well as other bug fixes. If you start seeing odd job failures, check your tox version. Doug --- Begin forwarded message from toxdevorg --- From: toxdevorg To: testing-in-python , tox-dev Date: Mon, 09 Jul 2018 08:45:15 -0700 Subject: [TIP] tox release 3.1.1 The tox team is proud to announce the 3.1.1 bug fix release! tox aims to automate and standardize testing in Python. 
It is part of a larger vision of easing the packaging, testing and release process of Python software. For details about the fix(es),please check the CHANGELOG: https://pypi.org/project/tox/3.1.1/#changelog We thank all present and past contributors to tox. Have a look at https://github.com/tox-dev/tox/blob/master/CONTRIBUTORS to see who contributed. Happy toxing, the tox-dev team --- End forwarded message --- From openstack at fried.cc Mon Jul 9 16:16:11 2018 From: openstack at fried.cc (Eric Fried) Date: Mon, 9 Jul 2018 11:16:11 -0500 Subject: [openstack-dev] Fwd: [TIP] tox release 3.1.1 In-Reply-To: <1531152060-sup-5499@lrrr.local> References: <1531152060-sup-5499@lrrr.local> Message-ID: <1eb422f7-b3f8-0715-44e5-3ed882a02be2@fried.cc> Doug- How long til we can start relying on the new behavior in the gate? I gots me some basepython to purge... -efried On 07/09/2018 11:03 AM, Doug Hellmann wrote: > Heads-up, there is a new tox release out. 3.1 includes some behavior > changes in the way basepython behaves (thanks, Stephen Finucan!), as > well as other bug fixes. > > If you start seeing odd job failures, check your tox version. > > Doug > > --- Begin forwarded message from toxdevorg --- > From: toxdevorg > To: testing-in-python , tox-dev > Date: Mon, 09 Jul 2018 08:45:15 -0700 > Subject: [TIP] tox release 3.1.1 > > The tox team is proud to announce the 3.1.1 bug fix release! > > tox aims to automate and standardize testing in Python. It is part of > a larger vision of easing the packaging, testing and release process > of Python software. > > For details about the fix(es),please check the CHANGELOG: > https://pypi.org/project/tox/3.1.1/#changelog > > We thank all present and past contributors to tox. Have a look at > https://github.com/tox-dev/tox/blob/master/CONTRIBUTORS to see who > contributed. > > Happy toxing, > the tox-dev team > > --- End forwarded message --- > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From tpb at dyncloud.net Mon Jul 9 16:42:19 2018 From: tpb at dyncloud.net (Tom Barron) Date: Mon, 9 Jul 2018 12:42:19 -0400 Subject: [openstack-dev] [manila] Planning Etherpad for Denver 2018 PTG Message-ID: <20180709164219.dbm5v3i2etwo74n7@barron.net> Here's an etherpad we can use for planning for the Denver PTG in September [1]. Please add topics as they occur to you! -- Tom Barron (tbarron) [1] https://etherpad.openstack.org/p/manila-ptg-planning-denver-2018 From mrhillsman at gmail.com Mon Jul 9 17:27:55 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Mon, 9 Jul 2018 12:27:55 -0500 Subject: [openstack-dev] Reminder: UC Meeting Today 1800UTC / 1300CST Message-ID: Hey everyone, Please see https://wiki.openstack.org/wiki/Governance/Foundation/Us erCommittee for UC meeting info and add additional agenda items if needed. -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From doug at doughellmann.com Mon Jul 9 17:44:00 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 09 Jul 2018 13:44:00 -0400 Subject: [openstack-dev] Fwd: [TIP] tox release 3.1.1 In-Reply-To: <1eb422f7-b3f8-0715-44e5-3ed882a02be2@fried.cc> References: <1531152060-sup-5499@lrrr.local> <1eb422f7-b3f8-0715-44e5-3ed882a02be2@fried.cc> Message-ID: <1531158191-sup-9509@lrrr.local> Excerpts from Eric Fried's message of 2018-07-09 11:16:11 -0500: > Doug- > > How long til we can start relying on the new behavior in the gate? I > gots me some basepython to purge... > > -efried Great question. I have to defer to the infra team to answer, since I'm not sure how we're managing the version of tox we use in CI. Doug > > On 07/09/2018 11:03 AM, Doug Hellmann wrote: > > Heads-up, there is a new tox release out. 3.1 includes some behavior > > changes in the way basepython behaves (thanks, Stephen Finucan!), as > > well as other bug fixes. > > > > If you start seeing odd job failures, check your tox version. > > > > Doug > > > > --- Begin forwarded message from toxdevorg --- > > From: toxdevorg > > To: testing-in-python , tox-dev > > Date: Mon, 09 Jul 2018 08:45:15 -0700 > > Subject: [TIP] tox release 3.1.1 > > > > The tox team is proud to announce the 3.1.1 bug fix release! > > > > tox aims to automate and standardize testing in Python. It is part of > > a larger vision of easing the packaging, testing and release process > > of Python software. > > > > For details about the fix(es),please check the CHANGELOG: > > https://pypi.org/project/tox/3.1.1/#changelog > > > > We thank all present and past contributors to tox. Have a look at > > https://github.com/tox-dev/tox/blob/master/CONTRIBUTORS to see who > > contributed. > > > > Happy toxing, > > the tox-dev team > > > > --- End forwarded message --- > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > From tobias.urdin at crystone.com Mon Jul 9 18:44:12 2018 From: tobias.urdin at crystone.com (Tobias Urdin) Date: Mon, 9 Jul 2018 18:44:12 +0000 Subject: [openstack-dev] [puppet] puppet-senlin development In-Reply-To: <2fd18fde-f7bf-e84b-27f6-b697f58e3f6b@privacysystems.eu> References: <2fd18fde-f7bf-e84b-27f6-b697f58e3f6b@privacysystems.eu> Message-ID: <29AED86D-5588-405D-9257-06C942840E4F@crystone.com> Hello Alex, I personally don't know about any entity specifically working on the Puppet Senlin module. We strongly welcome anybody to contribute to the development of the Puppet OpenStack modules. We are happy to help :) Best regards Tobias Sent from my iPhone On 9 Jul 2018, at 16:00, Alexandru Sorodoc > wrote: Hello, Is anyone working or planning to work on the puppet-senlin module? We want to use Senlin in our Pike deployment and we are considering contributing to its puppet module to bring it to a working state. Best regards, Alex __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cdent+os at anticdent.org Mon Jul 9 18:52:13 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Mon, 9 Jul 2018 19:52:13 +0100 (BST) Subject: [openstack-dev] [nova] [placement] placement update 18-27 In-Reply-To: References: Message-ID: On Fri, 6 Jul 2018, Chris Dent wrote: > This is placement update 18-27, a weekly update of ongoing > development related to the [OpenStack](https://www.openstack.org/) > [placement > service](https://developer.openstack.org/api-ref/placement/). This > is a contract version. Forgot to mention: There won't be an 18-28 this Friday, I'll be out and about. If someone else would like to do one, that would be great. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From pabelanger at redhat.com Mon Jul 9 19:29:14 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Mon, 9 Jul 2018 15:29:14 -0400 Subject: [openstack-dev] Fwd: [TIP] tox release 3.1.1 In-Reply-To: <1531158191-sup-9509@lrrr.local> References: <1531152060-sup-5499@lrrr.local> <1eb422f7-b3f8-0715-44e5-3ed882a02be2@fried.cc> <1531158191-sup-9509@lrrr.local> Message-ID: <20180709192914.GA18937@localhost.localdomain> On Mon, Jul 09, 2018 at 01:44:00PM -0400, Doug Hellmann wrote: > Excerpts from Eric Fried's message of 2018-07-09 11:16:11 -0500: > > Doug- > > > > How long til we can start relying on the new behavior in the gate? I > > gots me some basepython to purge... > > > > -efried > > Great question. I have to defer to the infra team to answer, since I'm > not sure how we're managing the version of tox we use in CI. > Should be less then 24 hours, likely sooner. We pull in the latest tox when we rebuild images each day[1]. [1] http://git.openstack.org/cgit/openstack-infra/project-config/tree/nodepool/elements/infra-package-needs/install.d/10-pip-packages From emilien at redhat.com Mon Jul 9 19:34:56 2018 From: emilien at redhat.com (Emilien Macchi) Date: Mon, 9 Jul 2018 14:34:56 -0500 Subject: [openstack-dev] [puppet][tripleo] Why is this acceptance test failing? In-Reply-To: <20180705025222.qhubpmtyyfrrhbuk@redhat.com> References: <20180704012937.6eoffaxeeeq4oadg@redhat.com> <20180705025222.qhubpmtyyfrrhbuk@redhat.com> Message-ID: On Wed, Jul 4, 2018 at 9:53 PM Lars Kellogg-Stedman wrote: > On Wed, Jul 04, 2018 at 07:51:20PM -0600, Emilien Macchi wrote: > > The actual problem is that the manifest isn't idempotent anymore: > > > http://logs.openstack.org/47/575147/16/check/puppet-openstack-beaker-centos-7/3f70cc9/job-output.txt.gz#_2018-07-04_00_42_19_705516 > > Hey Emilien, thanks for taking a look. I'm not following -- or maybe > I'm just misreading the failure message. It really looks to me as if > the failure is caused by a regular expression; it says: > > Failure/Error: > apply_manifest(pp, :catch_changes => true) do |result| > expect(result.stderr) > .to > include_regexp([/Puppet::Type::Keystone_tenant::ProviderOpenstack: Support > for a resource without the domain.*using 'Default'.*default domain id is > '/]) > end > > And yet, the regular expression in that check clearly matches the > output shown in the failure message. What do you see that points at an > actual idempotency issue? > > (I wouldn't be at all surprised to find an actual problem in this > change; I've fixed several already. I'm just not sure how to turn > this failure into actionable information.) > Sorry for late answers, not doing a good job at catching up emails since I was 2 weeks on PTO. 
So in order to test if that comes from your code or not, please try the manifest yourself and run puppet 2 times. If the second time is still triggering changes in the catalog, it means the idempotency of the resource is broken, which can also mean the resource itself isn't created properly the first time and a second puppet run tries to create it again. Basically, you shouldn't see that on a second puppet run: /Stage[main]/Keystone/Keystone_domain[my_default_domain]/is_default: is_default changed 'false' to 'true' If you can't reproduce it, let me know on IRC and I'll help you but you could use https://github.com/openstack/puppet-openstack-integration/blob/master/all-in-one.sh if you need a quick way to deploy. Hope this helps, -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From prometheanfire at gentoo.org Mon Jul 9 20:15:23 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Mon, 9 Jul 2018 15:15:23 -0500 Subject: [openstack-dev] [requirements][taskflow] networkx migration Message-ID: <20180709201523.y3qhroncve5vqmu7@gentoo.org> We have a patch that looks good, can we get it merged? https://review.openstack.org/#/c/577833/ -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From openstack at nemebean.com Mon Jul 9 20:42:02 2018 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 9 Jul 2018 15:42:02 -0500 Subject: [openstack-dev] Fwd: [TIP] tox release 3.1.1 In-Reply-To: <1eb422f7-b3f8-0715-44e5-3ed882a02be2@fried.cc> References: <1531152060-sup-5499@lrrr.local> <1eb422f7-b3f8-0715-44e5-3ed882a02be2@fried.cc> Message-ID: On 07/09/2018 11:16 AM, Eric Fried wrote: > Doug- > > How long til we can start relying on the new behavior in the gate? I > gots me some basepython to purge... I want to point out that most projects require a rather old version of tox, so chances are most people are not staying up to date with the very latest version. I don't love the repetition in tox.ini right now, but I also don't love that immediately bumping the lower bound for tox is going to be kind of disruptive to a lot of people. 1: http://codesearch.openstack.org/?q=minversion&i=nope&files=tox.ini&repos= > > -efried > > On 07/09/2018 11:03 AM, Doug Hellmann wrote: >> Heads-up, there is a new tox release out. 3.1 includes some behavior >> changes in the way basepython behaves (thanks, Stephen Finucan!), as >> well as other bug fixes. >> >> If you start seeing odd job failures, check your tox version. >> >> Doug >> >> --- Begin forwarded message from toxdevorg --- >> From: toxdevorg >> To: testing-in-python , tox-dev >> Date: Mon, 09 Jul 2018 08:45:15 -0700 >> Subject: [TIP] tox release 3.1.1 >> >> The tox team is proud to announce the 3.1.1 bug fix release! >> >> tox aims to automate and standardize testing in Python. It is part of >> a larger vision of easing the packaging, testing and release process >> of Python software. >> >> For details about the fix(es),please check the CHANGELOG: >> https://pypi.org/project/tox/3.1.1/#changelog >> >> We thank all present and past contributors to tox. Have a look at >> https://github.com/tox-dev/tox/blob/master/CONTRIBUTORS to see who >> contributed. 
>> >> Happy toxing, >> the tox-dev team >> >> --- End forwarded message --- >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From doug at doughellmann.com Mon Jul 9 20:58:34 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 09 Jul 2018 16:58:34 -0400 Subject: [openstack-dev] Fwd: [TIP] tox release 3.1.1 In-Reply-To: References: <1531152060-sup-5499@lrrr.local> <1eb422f7-b3f8-0715-44e5-3ed882a02be2@fried.cc> Message-ID: <1531169863-sup-281@lrrr.local> Excerpts from Ben Nemec's message of 2018-07-09 15:42:02 -0500: > > On 07/09/2018 11:16 AM, Eric Fried wrote: > > Doug- > > > > How long til we can start relying on the new behavior in the gate? I > > gots me some basepython to purge... > > I want to point out that most projects require a rather old version of > tox, so chances are most people are not staying up to date with the very > latest version. I don't love the repetition in tox.ini right now, but I > also don't love that immediately bumping the lower bound for tox is > going to be kind of disruptive to a lot of people. > > 1: http://codesearch.openstack.org/?q=minversion&i=nope&files=tox.ini&repos= Good point. Any patches to clean up the repetition should probably go ahead and update that minimum version setting, too. Doug From corvus at inaugust.com Mon Jul 9 21:23:11 2018 From: corvus at inaugust.com (James E. Blair) Date: Mon, 09 Jul 2018 14:23:11 -0700 Subject: [openstack-dev] [python3][tc][infra][docs] changing the documentation build PTI to use tox In-Reply-To: <1531150517-sup-2440@lrrr.local> (Doug Hellmann's message of "Mon, 09 Jul 2018 11:42:30 -0400") References: <1530823071-sup-2420@lrrr.local> <56346403-8edd-8cd0-f5c1-800bb87584e3@redhat.com> <1531150517-sup-2440@lrrr.local> Message-ID: <87o9fgdry8.fsf@meyer.lemoncheese.net> Doug Hellmann writes: > Excerpts from Zane Bitter's message of 2018-07-09 11:04:28 -0400: >> On 05/07/18 16:46, Doug Hellmann wrote: >> > I have a governance patch up [1] to change the project-testing-interface >> > (PTI) for building documentation to restore the use of tox. >> > >> > We originally changed away from tox because we wanted to have a >> > single standard command that anyone could use to build the documentation >> > for a project. It turns out that is more complicated than just >> > running sphinx-build in a lot of cases anyway, because of course >> > you have a bunch of dependencies to install before sphinx-build >> > will work. >> >> Is this the main reason? If we think we made the wrong call (i.e. >> everyone has to set up a virtualenv and install doc/requirements.txt >> anyway so we should just make them use tox even if they are not Python >> projects), then I agree it makes sense to fix it even though we only >> _just_ finished telling people it would be the opposite way. > > Yes, we made the wrong call when we set the PTI to not use tox for these > cases. 
> >> > Updating the job that uses sphinx directly to run under python 3, >> > while allowing the transition to be self-testing, was going to >> > require writing some extra complexity to look at something in the >> > repository to decide what version of python to use. Since tox >> > handles that for us by letting us set basepython in the virtualenv >> > configuration, it seemed more straightforward to go back to using >> > tox. >> >> Wouldn't another option be to have separate Zuul jobs for Python 3 and >> Python 2-based sphinx builds? Then the switchover would still be >> self-testing. >> >> I'd rather do that if this is the main problem we're trying to solve, >> rather than reverse course. > > These jobs run on tag events, which are not "branch aware" (tags > can be on 0 or more branches at the same time). That means we cannot > have different versions of the job running for different branches. > > Instead we need 1 job, which uses data inside the repository to > decide exactly what to do. Instead of writing a new, more complicated, > job to look at a flag file or other settings to decide whether to > run sphinx under python 2 or 3, it will be simpler to go back to > using the old existing tox-based job and to use the tox configuration > to control the version of python. Using the tox job also has the > benefit of fixing the tox-siblings issue for projects like neutron > plugins that need neutron installed in order to generate their > documentation. So we fix 2 problems with 1 change. > > We actually have a similar problem for the release job, but in that > case we don't need tox because we don't need to install any > dependencies in order to build the artifacts. I have tested building > sdists and wheels from every repo with a setup.py and did not find > any failures related to using python 3, so we can just switch > everyone over to use the new job. Indeed, this is a situation where in many cases our intuition collides with git's implementation. We've always had this restriction with Zuul (we can cause different jobs to run for different tags, but we can only do so by matching the name of the tag, not the name of the branch that people associate with the tag). If we were very consistent about release version numbers and branches across projects, we could write some configuration which ran python2 jobs on some releases and python3 jobs on others. But we aren't in that position, and doing so would require a jumble of regexes, different for each project. In Zuul v3, since much of the configuration is in-repo, the desire to alter tag/release jobs based on the content in-repo is even closer to the surface. So the desire to handle this situation better is growing, and I think stands on its own merit. To that end, we've started exploring some changes to Zuul in that direction. One of them is here: https://review.openstack.org/578557 But, even if we do land that change, I think the PTI change that Doug is proposing is the best thing for us to do in this situation. We made the PTI so that we have a really simple interface and line of demarcation where we say that, collectively, we want all projects to be able to build docs, and we're going to build a bunch of automation around that, but the PTI is the boundary between that automation and the in-repo content. It has served us very well through a number of changes to how we run unit tests. 
The fact that we've gone through far fewer changes to how docs are built has perhaps led us to think that we didn't need the layer of abstraction that tox provided us. However, as soon as we removed it, we encountered a situation where, in fact, it would have insulated us. Put another way, I think the spirit of the PTI is about finding the right place where the automation that we build for all the projects stops, and the project-specific implementation begins. Facilitating a project saying "this project needs python3 to build docs" in a way that is independent of the automation system is the best outcome. -Jim From smalleni at redhat.com Mon Jul 9 22:21:01 2018 From: smalleni at redhat.com (Sai Sindhur Malleni) Date: Mon, 9 Jul 2018 16:21:01 -0600 Subject: [openstack-dev] [dib] pip-and-virtualenv element failing on CentOS Message-ID: Hi all, I raised https://bugs.launchpad.net/diskimage-builder/+bug/1768135 a while ago as CentOS images using pip-and-virtual-env elements are failing to build. While exporting DIB_INSTALLTYPE_pip_and_virtualenv=package helps workaround the issue, this wasn't needed earlier. Would really appreciate any help from the dib community to debug/fix this issue. -- Sai Sindhur Malleni Software Engineer Red Hat Inc. 100 East Davie Street Raleigh, NC, USA Work: (919) 754-4557 | Cell: (919) 985-1055 -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at fried.cc Mon Jul 9 23:59:42 2018 From: openstack at fried.cc (Eric Fried) Date: Mon, 9 Jul 2018 18:59:42 -0500 Subject: [openstack-dev] [stestr?][tox?][infra?] Unexpected success isn't a failure Message-ID: In gabbi, there's a way [1] to mark a test as an expected failure, which makes it show up in your stestr run thusly: {0} nova.tests.functional.api.openstack.placement.test_placement_api.allocations-1.28_put_that_allocation_to_new_consumer.test_request [0.710821s] ... ok ====== Totals ====== Ran: 1 tests in 9.0000 sec. - Passed: 0 - Skipped: 0 - Expected Fail: 1 - Unexpected Success: 0 - Failed: 0 If I go fix the thing causing the heretofore-expected failure, but forget to take out the `xfail: True`, it does this: {0} nova.tests.functional.api.openstack.placement.test_placement_api.allocations-1.28_put_that_allocation_to_new_consumer.test_request [0.710517s] ... FAILED {0} nova.tests.functional.api.openstack.placement.test_placement_api.allocations-1.28_put_that_allocation_to_new_consumer.test_request [0.000000s] ... ok ============================== Failed 1 tests - output below: ============================== nova.tests.functional.api.openstack.placement.test_placement_api.allocations-1.28_put_that_allocation_to_new_consumer.test_request ---------------------------------------------------------------------------------------------------------------------------------- ====== Totals ====== Ran: 2 tests in 9.0000 sec. - Passed: 1 - Skipped: 0 - Expected Fail: 0 - Unexpected Success: 1 - Failed: 0 BUT it does not cause the run to fail. For example, see the nova-tox-functional results for [2] (specifically PS4): the test appears twice in the middle of the run [3] and prints failure output [4] but the job passes [5]. So I'm writing this email because I have no idea if this is expected behavior or a bug (I'm hoping the latter, cause it's whack, yo); and if a bug, I have no idea whose bug it should be. Help? 
Thanks, efried [1] https://gabbi.readthedocs.io/en/latest/format.html?highlight=xfail [2] https://review.openstack.org/#/c/579921/4 [3] http://logs.openstack.org/21/579921/4/check/nova-tox-functional/5fb6ee9/job-output.txt.gz#_2018-07-09_17_22_11_846366 [4] http://logs.openstack.org/21/579921/4/check/nova-tox-functional/5fb6ee9/job-output.txt.gz#_2018-07-09_17_31_07_229271 [5] http://logs.openstack.org/21/579921/4/check/nova-tox-functional/5fb6ee9/testr_results.html.gz From mtreinish at kortar.org Tue Jul 10 03:03:48 2018 From: mtreinish at kortar.org (Matthew Treinish) Date: Mon, 9 Jul 2018 23:03:48 -0400 Subject: [openstack-dev] [stestr?][tox?][infra?] Unexpected success isn't a failure In-Reply-To: References: Message-ID: <20180710030347.GA11011@sinanju.localdomain> On Mon, Jul 09, 2018 at 06:59:42PM -0500, Eric Fried wrote: > In gabbi, there's a way [1] to mark a test as an expected failure, which > makes it show up in your stestr run thusly: > > {0} > nova.tests.functional.api.openstack.placement.test_placement_api.allocations-1.28_put_that_allocation_to_new_consumer.test_request > [0.710821s] ... ok > > ====== > Totals > ====== > Ran: 1 tests in 9.0000 sec. > - Passed: 0 > - Skipped: 0 > - Expected Fail: 1 > - Unexpected Success: 0 > - Failed: 0 > > If I go fix the thing causing the heretofore-expected failure, but > forget to take out the `xfail: True`, it does this: > > {0} > nova.tests.functional.api.openstack.placement.test_placement_api.allocations-1.28_put_that_allocation_to_new_consumer.test_request > [0.710517s] ... FAILED > {0} > nova.tests.functional.api.openstack.placement.test_placement_api.allocations-1.28_put_that_allocation_to_new_consumer.test_request > [0.000000s] ... ok > > ============================== > Failed 1 tests - output below: > ============================== > > nova.tests.functional.api.openstack.placement.test_placement_api.allocations-1.28_put_that_allocation_to_new_consumer.test_request > ---------------------------------------------------------------------------------------------------------------------------------- > > > ====== > Totals > ====== > Ran: 2 tests in 9.0000 sec. > - Passed: 1 > - Skipped: 0 > - Expected Fail: 0 > - Unexpected Success: 1 > - Failed: 0 > > BUT it does not cause the run to fail. For example, see the > nova-tox-functional results for [2] (specifically PS4): the test appears > twice in the middle of the run [3] and prints failure output [4] but the > job passes [5]. > > So I'm writing this email because I have no idea if this is expected > behavior or a bug (I'm hoping the latter, cause it's whack, yo); and if > a bug, I have no idea whose bug it should be. Help? It's definitely a bug, and likely a bug in stestr (or one of the lower level packages like testtools or python-subunit), because that's what's generating the return code. Tox just looks at the return code from the commands to figure out if things were successful or not. I'm a bit surprised by this though I thought we covered the unxsuccess and xfail cases because I would have expected cdent to file a bug if it didn't. Looking at the stestr tests we don't have coverage for the unxsuccess case so I can see how this slipped through. 
Looking at the where the return code for the output from the run command is generated (it's a bit weird because run calls the load command internally which handles the output generation, result storage, and final return code): https://github.com/mtreinish/stestr/blob/master/stestr/commands/load.py#L222-L225 I'm thinking it might be an issue in testtools or python-subunit, I don't remember which generates the results object used there (if it is subunit it'll be a subclass from testtools). But I'll have to trace through it to be sure. In the mean time we can easily workaround the issue in stestr itself by just manually checking the result status instead of relying on the existing function from the results class. -Matt Treinish > > [1] https://gabbi.readthedocs.io/en/latest/format.html?highlight=xfail > [2] https://review.openstack.org/#/c/579921/4 > [3] > http://logs.openstack.org/21/579921/4/check/nova-tox-functional/5fb6ee9/job-output.txt.gz#_2018-07-09_17_22_11_846366 > [4] > http://logs.openstack.org/21/579921/4/check/nova-tox-functional/5fb6ee9/job-output.txt.gz#_2018-07-09_17_31_07_229271 > [5] > http://logs.openstack.org/21/579921/4/check/nova-tox-functional/5fb6ee9/testr_results.html.gz > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From emilien at redhat.com Tue Jul 10 04:06:08 2018 From: emilien at redhat.com (Emilien Macchi) Date: Mon, 9 Jul 2018 23:06:08 -0500 Subject: [openstack-dev] [puppet] puppet-senlin development In-Reply-To: <29AED86D-5588-405D-9257-06C942840E4F@crystone.com> References: <2fd18fde-f7bf-e84b-27f6-b697f58e3f6b@privacysystems.eu> <29AED86D-5588-405D-9257-06C942840E4F@crystone.com> Message-ID: Also please take a look at this guide to create new modules: https://docs.openstack.org/puppet-openstack-guide/latest/contributor/new-module.html Thanks and welcome! On Mon, Jul 9, 2018 at 1:46 PM Tobias Urdin wrote: > Hello Alex, > > I personally don’t know about any entity specifically working on the > Puppet Senlin module. > > We strongly welcome anybody to contribute to the development of the Puppet > OpenStack modules. > > We are happy to help :) > > Best regards > Tobias > > Sent from my iPhone > > On 9 Jul 2018, at 16:00, Alexandru Sorodoc wrote: > > Hello, > > Is anyone working or planning to work on the puppet-senlin module? We want > to use Senlin in our Pike deployment and we are considering contributing to > its puppet module to bring it to a working state. > > Best regards, > Alex > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From renat.akhmerov at gmail.com Tue Jul 10 04:42:43 2018 From: renat.akhmerov at gmail.com (Renat Akhmerov) Date: Tue, 10 Jul 2018 11:42:43 +0700 Subject: [openstack-dev] [mistral] Clearing out old gerrit reviews In-Reply-To: References: Message-ID: Dougal, I’m totally OK with this idea. 
Thanks Renat Akhmerov @Nokia On 9 Jul 2018, 22:14 +0700, Dougal Matthews , wrote: > Hey folks, > > I'd like to propose that we start abandoning old Gerrit reviews. > > This report shows how stale and out of date some of the reviews are: > http://stackalytics.com/report/reviews/mistral-group/open > > I would like to initially abandon anything without any activity for a year, but we might want to consider a shorter limit - maybe 6 months. Reviews can be restored, so the risk is low. > > What do you think? Any objections or counter suggestions? > > If I don't hear any complaints, I'll go ahead with this next week (or maybe the following week). > > Cheers, > Dougal > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From gdubreui at redhat.com Tue Jul 10 05:44:09 2018 From: gdubreui at redhat.com (Gilles Dubreuil) Date: Tue, 10 Jul 2018 15:44:09 +1000 Subject: [openstack-dev] [neutron][graphql] PoC with Oslo integration In-Reply-To: References: Message-ID: <315ffaaf-f6da-22d1-d615-fc213f0a9a62@redhat.com> Hi, We're going to reschedule this one. Sorry for the inconvenience. Regards, Gilles On 02/07/18 15:17, Gilles Dubreuil wrote: > Hi, > > We now have an initial base for using GraphQL [1] as you can see from > [2]. > What we need now is too use Oslo properly to police the requests. > > The best way to achieve that would likely to use a similar approach as > the pecan hooks which are in place for v2.0. > Ultimately some of the code could be share between v2.0 and graphql > but that's not a goal or either a priority for now. > > We need Neutron developers to help with the design and to get this > moving in the right direction. > > I'm scheduling an on-line working session for next week (using either > BlueJeans or Google Hangouts)? > Please vote on doodle [2] on the best time for you (please understand > that we have to cover all time zones). > > Thanks, > Gilles > > [1] https://storyboard.openstack.org/#!/story/2002782 > [2] https://review.openstack.org/#/c/575898/ > [3] https://doodle.com/poll/43kx8nfpe6w6pvia > -- Gilles Dubreuil Senior Software Engineer - Red Hat - Openstack DFG Integration Email: gilles at redhat.com GitHub/IRC: gildub Mobile: +61 400 894 219 From n_jayshankar at yahoo.com Tue Jul 10 06:46:16 2018 From: n_jayshankar at yahoo.com (jayshankar nair) Date: Tue, 10 Jul 2018 06:46:16 +0000 (UTC) Subject: [openstack-dev] swift containers. In-Reply-To: <41ab5d70-deaf-39a7-56e0-9a4ef4ad20c1@inaugust.com> References: <842093415.1132418.1531118641150.ref@mail.yahoo.com> <842093415.1132418.1531118641150@mail.yahoo.com> <1933463637.1140026.1531118797717@mail.yahoo.com> <41ab5d70-deaf-39a7-56e0-9a4ef4ad20c1@inaugust.com> Message-ID: <1834063059.1812972.1531205176930@mail.yahoo.com> Hi, The debugging output is as below. 
python firststack.py Manager defaults:unknown running task compute.GET.servers.detail REQ: curl -g -i -X GET http://192.168.0.19:5000/v3 -H "Accept: application/json" -H "User-Agent: openstacksdk/0.11.3 keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" RESP: [200] Date: Tue, 10 Jul 2018 01:15:04 GMT Server: Apache/2.4.6 (CentOS) Vary: X-Auth-Token,Accept-Encoding x-openstack-request-id: req-3c244cef-1c7c-4d51-9b18-e6e1d5418713 Content-Encoding: gzip Content-Length: 196 Keep-Alive: timeout=15, max=100 Connection: Keep-Alive Content-Type: application/json RESP BODY: {"version": {"status": "stable", "updated": "2018-02-28T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.10", "links": [{"href": "http://192.168.0.19:5000/v3/", "rel": "self"}]}} GET call to None for http://192.168.0.19:5000/v3 used request id req-3c244cef-1c7c-4d51-9b18-e6e1d5418713 Making authentication request to http://192.168.0.19:5000/v3/auth/tokens {"token": {"is_domain": false, "methods": ["password"], "roles": [{"id": "2e79a56540684ebb8fc177433d67b2a5", "name": "admin"}], "expires_at": "2018-07-10T02:15:05.000000Z", "project": {"domain": {"id": "default", "name": "Default"}, "id": "4a0d46f830044e74b1a84c93e5dbacda", "name": "admin"}, "catalog": [{"endpoints": [{"url": "http://192.168.0.19:9696", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "0b574bc23cc54bd8a1266ed858a2e87f"}, {"url": "http://192.168.0.19:9696", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "3066119a6c9147fa8e4626725c3a34ad"}, {"url": "http://192.168.0.19:9696", "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": "4243ebf7df0f46fbb062b828d7147ca4"}], "type": "network", "id": "2c9f0da1dc514008bdc8bf967be6eeaa", "name": "neutron"}, {"endpoints": [{"url": "http://192.168.0.19:8776/v2/4a0d46f830044e74b1a84c93e5dbacda", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "59c73f5064b7494faa5ca3b389403746"}, {"url": "http://192.168.0.19:8776/v2/4a0d46f830044e74b1a84c93e5dbacda", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "addbbfdb56a244eba884e3995a548b16"}, {"url": "http://192.168.0.19:8776/v2/4a0d46f830044e74b1a84c93e5dbacda", "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": "f44419832a474e3fa08716945b520219"}], "type": "volumev2", "id": "31160ee3b1c54c8ca5a90c417f4f1425", "name": "cinderv2"}, {"endpoints": [{"url": "http://192.168.0.19:8776/v3/4a0d46f830044e74b1a84c93e5dbacda", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "19b9b5d72f4540f183c4ab574d3efd71"}, {"url": "http://192.168.0.19:8776/v3/4a0d46f830044e74b1a84c93e5dbacda", "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": "1ded29d260604e9b9cf14706fa558a21"}, {"url": "http://192.168.0.19:8776/v3/4a0d46f830044e74b1a84c93e5dbacda", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "c4021436be5845cf8efa797f27e48b63"}], "type": "volumev3", "id": "3da7323094724d35b987fe60fbc7ea38", "name": "cinderv3"}, {"endpoints": [{"url": "http://192.168.0.19:8776/v1/4a0d46f830044e74b1a84c93e5dbacda", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "6b5f1f96bef1441fa16947e3d2578732"}, {"url": "http://192.168.0.19:8776/v1/4a0d46f830044e74b1a84c93e5dbacda", "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": 
"d9dfe7db65824874af7a093f16a7ebd0"}, {"url": "http://192.168.0.19:8776/v1/4a0d46f830044e74b1a84c93e5dbacda", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "fed975472ca849b0a4d39570c3ab941b"}], "type": "volume", "id": "600f1705da8a41aeb87d22cff26a7d49", "name": "cinder"}, {"endpoints": [{"url": "http://192.168.0.19:8774/v2.1/4a0d46f830044e74b1a84c93e5dbacda", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "2bea479cb5ea4d128ce9e7f8009be760"}, {"url": "http://192.168.0.19:8774/v2.1/4a0d46f830044e74b1a84c93e5dbacda", "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": "babb14683847492eb3129535bda12f78"}, {"url": "http://192.168.0.19:8774/v2.1/4a0d46f830044e74b1a84c93e5dbacda", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "c82c93df6ffa4780a1e4c8912877f710"}], "type": "compute", "id": "6b4e3642519941bbbfb9c4163da331c7", "name": "nova"}, {"endpoints": [{"url": "http://192.168.0.12:8041", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "00055fa2240248bf9e693a1d446c7c59"}, {"url": "http://192.168.0.12:8041", "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": "88431c7c2f67409fb0fc41fe68ec3ead"}, {"url": "http://192.168.0.12:8041", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "a3913c23488e456caee2dd66c8e584bf"}], "type": "metric", "id": "8a45ed7f83db46369fdd55126407a3bf", "name": "gnocchi"}, {"endpoints": [{"url": "http://192.168.0.12:8042", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "0289010ada1446469e2ff14de09ff780"}, {"url": "http://192.168.0.12:8042", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "bc01b0a1074646f1aa534fa5f366189e"}, {"url": "http://192.168.0.12:8042", "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": "fcd3113e3246472f812b2b0c5cc35388"}], "type": "alarming", "id": "9394ac9cd85c4624b81a5b1dbb5fc478", "name": "aodh"}, {"endpoints": [{"url": "http://192.168.0.19:9292", "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": "3912bdc1f8cd4014bb9bfc8292c9ee7c"}, {"url": "http://192.168.0.19:9292", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "7b149ecd13ed4278bc45e106b1d7fcf2"}, {"url": "http://192.168.0.19:9292", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "7d093993eeb34acdb9c7c1afe9c77144"}], "type": "image", "id": "98c16d5399264fda9cc176c5bf65cf75", "name": "glance"}, {"endpoints": [{"url": "http://192.168.0.19:8778/placement", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "13782b29b3a04325be59c9c36d24622f"}, {"url": "http://192.168.0.19:8778/placement", "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": "99acbe33c97b4ca382d92a6d661adb44"}, {"url": "http://192.168.0.19:8778/placement", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "c3f429a1a1bb4eef9feb9c792e8aa45c"}], "type": "placement", "id": "a363e7f064164956ba17e2916287ece2", "name": "placement"}, {"endpoints": [{"url": "http://192.168.0.12:8080/v1/AUTH_4a0d46f830044e74b1a84c93e5dbacda", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "128768ce20c44b8998a949c6e73c3eb2"}, {"url": "http://192.168.0.12:8080/v1/AUTH_4a0d46f830044e74b1a84c93e5dbacda", "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", 
"id": "491809ebcf99486bb050f3dd7c54e91e"}, {"url": "http://192.168.0.12:8080/v1/AUTH_4a0d46f830044e74b1a84c93e5dbacda", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "59d32fdf1d01465bbfeb30291cf3edb0"}], "type": "object-store", "id": "c00d180fc57c4a75be0914ef2bbb2336", "name": "swift"}, {"endpoints": [{"url": "http://192.168.0.19:35357/v3", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "26698f851ccc44b99d1f3601b9917d9b"}, {"url": "http://192.168.0.19:5000/v3", "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": "724aafb2db954e3e867f841f790fb8b7"}, {"url": "http://192.168.0.19:5000/v3", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "8bef913a13f642e58458e9b098faa320"}], "type": "identity", "id": "e549698313b24f1cb0bfe1fff3066f63", "name": "keystone"}, {"endpoints": [{"url": "http://192.168.0.12:8777", "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": "09e23e6226b7415eaf17a5bf4d33eeb8"}, {"url": "http://192.168.0.12:8777", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "160ef2bc67534bc19b47df8328fdcf16"}, {"url": "http://192.168.0.12:8777", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "cdfdb096abf844348f8ff62187e68305"}], "type": "metering", "id": "ef0f3d2138434ce0a0d1138d0c1ce14e", "name": "ceilometer"}], "user": {"domain": {"id": "default", "name": "Default"}, "password_expires_at": null, "name": "admin", "id": "e43f463b2350460fbe5cf8b7f9be080d"}, "audit_ids": ["ZAiGbnoHTGWhuyIWbWXCeg"], "issued_at": "2018-07-10T01:15:05.000000Z"}} REQ: curl -g -i -X GET http://192.168.0.19:8774/v2.1/4a0d46f830044e74b1a84c93e5dbacda/servers/detail -H "User-Agent: openstacksdk/0.11.3 keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}89a50514ab1eb46092bab29d84440e4d41b5b127" RESP: [200] Content-Length: 4042 Content-Type: application/json Openstack-Api-Version: compute 2.1 X-Openstack-Nova-Api-Version: 2.1 Vary: OpenStack-API-Version, X-OpenStack-Nova-API-Version X-Openstack-Request-Id: req-29584048-12f7-4b0c-acaf-737ca9cfd786 X-Compute-Request-Id: req-29584048-12f7-4b0c-acaf-737ca9cfd786 Date: Tue, 10 Jul 2018 01:15:05 GMT Connection: keep-alive RESP BODY: {"servers": [{"OS-EXT-STS:task_state": null, "addresses": {}, "links": [{"href": "http://192.168.0.19:8774/v2.1/4a0d46f830044e74b1a84c93e5dbacda/servers/4fcc467b-a398-4311-bc61-0182185c345a", "rel": "self"}, {"href": "http://192.168.0.19:8774/4a0d46f830044e74b1a84c93e5dbacda/servers/4fcc467b-a398-4311-bc61-0182185c345a", "rel": "bookmark"}], "image": "", "OS-EXT-STS:vm_state": "error", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "OS-SRV-USG:launched_at": null, "flavor": {"id": "1", "links": [{"href": "http://192.168.0.19:8774/4a0d46f830044e74b1a84c93e5dbacda/flavors/1", "rel": "bookmark"}]}, "id": "4fcc467b-a398-4311-bc61-0182185c345a", "user_id": "e43f463b2350460fbe5cf8b7f9be080d", "OS-DCF:diskConfig": "AUTO", "accessIPv4": "", "accessIPv6": "", "OS-EXT-STS:power_state": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "ERROR", "updated": "2018-06-15T04:58:55Z", "hostId": "", "OS-EXT-SRV-ATTR:host": null, "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:hypervisor_hostname": null, "name": "cirrosnetwork2", "created": "2018-06-15T04:50:35Z", "tenant_id": "4a0d46f830044e74b1a84c93e5dbacda", "os-extended-volumes:volumes_attached": 
[{"id": "8db87548-250b-4150-9077-0723ec7a59ad"}], "fault": {"message": "Build of instance 4fcc467b-a398-4311-bc61-0182185c345a aborted: The server has either erred or is incapable of performing the requested operation. (HTTP 500) (Request-ID: req-ebc925da-efef-45b6-8e34-d6de8ba191d7)", "code": 500, "details": "  File \"/usr/lib/python2.7/site-packages/nova/compute/manager.py\", line 1840, in _do_build_and_run_instance\n    filter_properties, request_spec)\n  File \"/usr/lib/python2.7/site-packages/nova/compute/manager.py\", line 2062, in _build_and_run_instance\n    bdms=block_device_mapping)\n  File \"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py\", line 220, in __exit__\n    self.force_reraise()\n  File \"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py\", line 196, in force_reraise\n    six.reraise(self.type_, self.value, self.tb)\n  File \"/usr/lib/python2.7/site-packages/nova/compute/manager.py\", line 2014, in _build_and_run_instance\n    block_device_mapping) as resources:\n  File \"/usr/lib64/python2.7/contextlib.py\", line 17, in __enter__\n    return self.gen.next()\n  File \"/usr/lib/python2.7/site-packages/nova/compute/manager.py\", line 2224, in _build_resources\n    reason=e.format_message())\n", "created": "2018-06-15T04:58:51Z"}, "metadata": {}}, {"OS-EXT-STS:task_state": null, "addresses": {"int-net": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:c1:97:39", "version": 4, "addr": "10.0.0.106", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://192.168.0.19:8774/v2.1/4a0d46f830044e74b1a84c93e5dbacda/servers/2d707336-b2fc-4de8-8fd6-a6c34ce55bc1", "rel": "self"}, {"href": "http://192.168.0.19:8774/4a0d46f830044e74b1a84c93e5dbacda/servers/2d707336-b2fc-4de8-8fd6-a6c34ce55bc1", "rel": "bookmark"}], "image": "", "OS-EXT-STS:vm_state": "stopped", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-SRV-USG:launched_at": "2018-06-14T04:26:03.000000", "flavor": {"id": "1", "links": [{"href": "http://192.168.0.19:8774/4a0d46f830044e74b1a84c93e5dbacda/flavors/1", "rel": "bookmark"}]}, "id": "2d707336-b2fc-4de8-8fd6-a6c34ce55bc1", "security_groups": [{"name": "default"}], "user_id": "e43f463b2350460fbe5cf8b7f9be080d", "OS-DCF:diskConfig": "AUTO", "accessIPv4": "", "accessIPv6": "", "OS-EXT-STS:power_state": 4, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "SHUTOFF", "updated": "2018-07-04T03:35:26Z", "hostId": "1e10919eb41c0be269d4b7131d3bc18ffa46ed618be8f73137b9a6c0", "OS-EXT-SRV-ATTR:host": "localhost.localdomain", "OS-SRV-USG:terminated_at": null, "key_name": null, "OS-EXT-SRV-ATTR:hypervisor_hostname": "localhost.localdomain", "name": "cirrosnetwork", "created": "2018-06-14T04:18:27Z", "tenant_id": "4a0d46f830044e74b1a84c93e5dbacda", "os-extended-volumes:volumes_attached": [{"id": "d0ef61d7-5539-4d49-b3e6-c510c879f262"}], "metadata": {}}]} GET call to compute for http://192.168.0.19:8774/v2.1/4a0d46f830044e74b1a84c93e5dbacda/servers/detail used request id req-29584048-12f7-4b0c-acaf-737ca9cfd786 Manager defaults:unknown ran task compute.GET.servers.detail in 1.51917791367s cirrosnetwork2 cirrosnetwork Manager defaults:unknown running task compute.GET.images.detail REQ: curl -g -i -X GET http://192.168.0.19:8774/v2.1/4a0d46f830044e74b1a84c93e5dbacda/images/detail -H "User-Agent: openstacksdk/0.11.3 keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}89a50514ab1eb46092bab29d84440e4d41b5b127" RESP: [200] Content-Length: 1362 Content-Type: application/json Openstack-Api-Version: 
compute 2.1 X-Openstack-Nova-Api-Version: 2.1 Vary: OpenStack-API-Version, X-OpenStack-Nova-API-Version X-Openstack-Request-Id: req-50ca450e-6a14-4a6e-9fc5-c8d88f68551b X-Compute-Request-Id: req-50ca450e-6a14-4a6e-9fc5-c8d88f68551b Date: Tue, 10 Jul 2018 01:15:06 GMT Connection: keep-alive RESP BODY: {"images": [{"status": "ACTIVE", "updated": "2018-06-14T04:10:26Z", "links": [{"href": "http://192.168.0.19:8774/v2.1/4a0d46f830044e74b1a84c93e5dbacda/images/566a3a3f-aeee-44bb-a1ea-0f1f292a7fae", "rel": "self"}, {"href": "http://192.168.0.19:8774/4a0d46f830044e74b1a84c93e5dbacda/images/566a3a3f-aeee-44bb-a1ea-0f1f292a7fae", "rel": "bookmark"}, {"href": "http://192.168.0.19:9292/images/566a3a3f-aeee-44bb-a1ea-0f1f292a7fae", "type": "application/vnd.openstack.image", "rel": "alternate"}], "id": "566a3a3f-aeee-44bb-a1ea-0f1f292a7fae", "OS-EXT-IMG-SIZE:size": 13287936, "name": "cirros", "created": "2018-06-14T04:10:24Z", "minDisk": 0, "progress": 100, "minRam": 0, "metadata": {}}, {"status": "ACTIVE", "updated": "2018-06-13T02:53:54Z", "links": [{"href": "http://192.168.0.19:8774/v2.1/4a0d46f830044e74b1a84c93e5dbacda/images/7033fdbe-a0fe-4599-b987-ae70b398402c", "rel": "self"}, {"href": "http://192.168.0.19:8774/4a0d46f830044e74b1a84c93e5dbacda/images/7033fdbe-a0fe-4599-b987-ae70b398402c", "rel": "bookmark"}, {"href": "http://192.168.0.19:9292/images/7033fdbe-a0fe-4599-b987-ae70b398402c", "type": "application/vnd.openstack.image", "rel": "alternate"}], "id": "7033fdbe-a0fe-4599-b987-ae70b398402c", "OS-EXT-IMG-SIZE:size": 13267968, "name": "cirros", "created": "2018-06-13T02:53:54Z", "minDisk": 0, "progress": 100, "minRam": 0, "metadata": {}}]} GET call to compute for http://192.168.0.19:8774/v2.1/4a0d46f830044e74b1a84c93e5dbacda/images/detail used request id req-50ca450e-6a14-4a6e-9fc5-c8d88f68551b Manager defaults:unknown ran task compute.GET.images.detail in 0.634311914444s cirros cirros Manager defaults:unknown running task compute.GET.flavors.detail REQ: curl -g -i -X GET http://192.168.0.19:8774/v2.1/4a0d46f830044e74b1a84c93e5dbacda/flavors/detail -H "User-Agent: openstacksdk/0.11.3 keystoneauth1/3.4.0 python-requests/2.14.2 CPython/2.7.5" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}89a50514ab1eb46092bab29d84440e4d41b5b127" RESP: [200] Content-Length: 2099 Content-Type: application/json Openstack-Api-Version: compute 2.1 X-Openstack-Nova-Api-Version: 2.1 Vary: OpenStack-API-Version, X-OpenStack-Nova-API-Version X-Openstack-Request-Id: req-23a3b1da-cab6-4883-a1c0-79cceb2be9be X-Compute-Request-Id: req-23a3b1da-cab6-4883-a1c0-79cceb2be9be Date: Tue, 10 Jul 2018 01:15:06 GMT Connection: keep-alive RESP BODY: {"flavors": [{"name": "m1.tiny", "links": [{"href": "http://192.168.0.19:8774/v2.1/4a0d46f830044e74b1a84c93e5dbacda/flavors/1", "rel": "self"}, {"href": "http://192.168.0.19:8774/4a0d46f830044e74b1a84c93e5dbacda/flavors/1", "rel": "bookmark"}], "ram": 512, "OS-FLV-DISABLED:disabled": false, "vcpus": 1, "swap": "", "os-flavor-access:is_public": true, "rxtx_factor": 1.0, "OS-FLV-EXT-DATA:ephemeral": 0, "disk": 1, "id": "1"}, {"name": "m1.small", "links": [{"href": "http://192.168.0.19:8774/v2.1/4a0d46f830044e74b1a84c93e5dbacda/flavors/2", "rel": "self"}, {"href": "http://192.168.0.19:8774/4a0d46f830044e74b1a84c93e5dbacda/flavors/2", "rel": "bookmark"}], "ram": 2048, "OS-FLV-DISABLED:disabled": false, "vcpus": 1, "swap": "", "os-flavor-access:is_public": true, "rxtx_factor": 1.0, "OS-FLV-EXT-DATA:ephemeral": 0, "disk": 20, "id": "2"}, {"name": "m1.medium", "links": [{"href": 
"http://192.168.0.19:8774/v2.1/4a0d46f830044e74b1a84c93e5dbacda/flavors/3", "rel": "self"}, {"href": "http://192.168.0.19:8774/4a0d46f830044e74b1a84c93e5dbacda/flavors/3", "rel": "bookmark"}], "ram": 4096, "OS-FLV-DISABLED:disabled": false, "vcpus": 2, "swap": "", "os-flavor-access:is_public": true, "rxtx_factor": 1.0, "OS-FLV-EXT-DATA:ephemeral": 0, "disk": 40, "id": "3"}, {"name": "m1.large", "links": [{"href": "http://192.168.0.19:8774/v2.1/4a0d46f830044e74b1a84c93e5dbacda/flavors/4", "rel": "self"}, {"href": "http://192.168.0.19:8774/4a0d46f830044e74b1a84c93e5dbacda/flavors/4", "rel": "bookmark"}], "ram": 8192, "OS-FLV-DISABLED:disabled": false, "vcpus": 4, "swap": "", "os-flavor-access:is_public": true, "rxtx_factor": 1.0, "OS-FLV-EXT-DATA:ephemeral": 0, "disk": 80, "id": "4"}, {"name": "m1.xlarge", "links": [{"href": "http://192.168.0.19:8774/v2.1/4a0d46f830044e74b1a84c93e5dbacda/flavors/5", "rel": "self"}, {"href": "http://192.168.0.19:8774/4a0d46f830044e74b1a84c93e5dbacda/flavors/5", "rel": "bookmark"}], "ram": 16384, "OS-FLV-DISABLED:disabled": false, "vcpus": 8, "swap": "", "os-flavor-access:is_public": true, "rxtx_factor": 1.0, "OS-FLV-EXT-DATA:ephemeral": 0, "disk": 160, "id": "5"}]} GET call to compute for http://192.168.0.19:8774/v2.1/4a0d46f830044e74b1a84c93e5dbacda/flavors/detail used request id req-23a3b1da-cab6-4883-a1c0-79cceb2be9be Manager defaults:unknown ran task compute.GET.flavors.detail in 0.0493588447571s m1.tiny m1.small m1.medium m1.large m1.xlarge [root at localhost python]# Thanks, Jayshankar On Monday, July 9, 2018, 2:10:21 PM GMT, Monty Taylor wrote: On 07/09/2018 02:46 AM, jayshankar nair wrote: > > > > > Hi, > > I am unable to create containers in object store. > > "Unable to get the Swift service info". > "Unable to get the swift container listing". > > My horizon is running on 192.168.0.19. My swift is running on > 192.168.0.12(how can i change it). > > I am trying to list the container with python sdk. Is this the right api. > > from openstack import connection > conn = connection.Connection(auth_url="http://192.168.0.19:5000/v3", >                        project_name="admin",username="admin", >                        password="6908a8d218f843dd", >                        user_domain_id="default", >                        project_domain_id="default", >                        identity_api_version=3) That looks fine (although you don't need identity_api_version=3 - you are specifying domain_ids - it'll figure things out) Can you add: import openstack openstack.enable_logging(http_debug=True) before your current code and paste the output? > for container in conn,object_store.containers(): > print(container.name). > > I need documentation of python sdk https://docs.openstack.org/openstacksdk/latest/ > Thanks, > Jayshankar > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From apetrich at redhat.com Tue Jul 10 07:47:21 2018 From: apetrich at redhat.com (Adriano Petrich) Date: Tue, 10 Jul 2018 08:47:21 +0100 Subject: [openstack-dev] [mistral] Clearing out old gerrit reviews In-Reply-To: References: Message-ID: Agreed. On 10 July 2018 at 05:42, Renat Akhmerov wrote: > Dougal, I’m totally OK with this idea. > > Thanks > > Renat Akhmerov > @Nokia > On 9 Jul 2018, 22:14 +0700, Dougal Matthews , wrote: > > Hey folks, > > I'd like to propose that we start abandoning old Gerrit reviews. > > This report shows how stale and out of date some of the reviews are: > http://stackalytics.com/report/reviews/mistral-group/open > > I would like to initially abandon anything without any activity for a > year, but we might want to consider a shorter limit - maybe 6 months. > Reviews can be restored, so the risk is low. > > What do you think? Any objections or counter suggestions? > > If I don't hear any complaints, I'll go ahead with this next week (or > maybe the following week). > > Cheers, > Dougal > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lhinds at redhat.com Tue Jul 10 08:20:45 2018 From: lhinds at redhat.com (Luke Hinds) Date: Tue, 10 Jul 2018 09:20:45 +0100 Subject: [openstack-dev] [OSSN-0084] Data retained after deletion of a ScaleIO volume Message-ID: <2dc4a6ce-b5a3-8cee-724c-4295fd595f54@redhat.com> Data retained after deletion of a ScaleIO volume --- ### Summary ### Certain storage volume configurations allow newly created volumes to contain previous data. This could lead to leakage of sensitive information between tenants. ### Affected Services / Software ### Cinder releases up to and including Queens with ScaleIO volumes using thin volumes and zero padding. ### Discussion ### Using both thin volumes and zero padding does not ensure data contained in a volume is actually deleted. The default volume provisioning rule is set to thick so most installations are likely not affected. Operators can check their configuration in `cinder.conf` or check for zero padding with this command `scli --query_all`. #### Recommended Actions #### Operators can use the following two workarounds, until the release of Rocky (planned 30th August 2018) which resolves the issue. 1. Swap to thin volumes 2. Ensure ScaleIO storage pools use zero-padding with: `scli --modify_zero_padding_policy (((--protection_domain_id | --protection_domain_name ) --storage_pool_name ) | --storage_pool_id ) (--enable_zero_padding | --disable_zero_padding)` ### Contacts / References ### Author: Nick Tait This OSSN : https://wiki.openstack.org/wiki/OSSN/OSSN-0084 Original LaunchPad Bug : https://bugs.launchpad.net/ossn/+bug/1699573 Mailing List : [Security] tag on openstack-dev at lists.openstack.org OpenStack Security Project : https://launchpad.net/~openstack-ossg -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: OpenPGP digital signature URL: From n_jayshankar at yahoo.com Tue Jul 10 09:04:02 2018 From: n_jayshankar at yahoo.com (jayshankar nair) Date: Tue, 10 Jul 2018 09:04:02 +0000 (UTC) Subject: [openstack-dev] creating instance References: <542814897.1850308.1531213442171.ref@mail.yahoo.com> Message-ID: <542814897.1850308.1531213442171@mail.yahoo.com> Hi, I  am trying to create an instance of cirros os(Project/Compute/Instances). I am getting the following error. Error: Failed to perform requested operation on instance "cirros1", the instance has an error status: Please try again later [Error: Build of instance 5de65e6d-fca6-4e78-a688-ead942e8ed2a aborted: The server has either erred or is incapable of performing the requested operation. (HTTP 500) (Request-ID: req-91535564-4caf-4975-8eff-7bca515d414e)]. How to debug the error. Thanks,Jayshankar -------------- next part -------------- An HTML attachment was scrubbed... URL: From cdent+os at anticdent.org Tue Jul 10 09:16:37 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 10 Jul 2018 10:16:37 +0100 (BST) Subject: [openstack-dev] [stestr?][tox?][infra?] Unexpected success isn't a failure In-Reply-To: <20180710030347.GA11011@sinanju.localdomain> References: <20180710030347.GA11011@sinanju.localdomain> Message-ID: On Mon, 9 Jul 2018, Matthew Treinish wrote: > It's definitely a bug, and likely a bug in stestr (or one of the lower level > packages like testtools or python-subunit), because that's what's generating > the return code. Tox just looks at the return code from the commands to figure > out if things were successful or not. I'm a bit surprised by this though I > thought we covered the unxsuccess and xfail cases because I would have expected > cdent to file a bug if it didn't. Looking at the stestr tests we don't have > coverage for the unxsuccess case so I can see how this slipped through. This was reported on testrepository some years ago and a bit of analysis was done: https://bugs.launchpad.net/testrepository/+bug/1429196 So yeah, I did file a bug but it fell off the radar during those dark times. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From balazs.gibizer at ericsson.com Tue Jul 10 11:41:00 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Tue, 10 Jul 2018 13:41:00 +0200 Subject: [openstack-dev] [nova]Notification update week 28 In-Reply-To: <1531132687.12223.1@smtp.office365.com> References: <1531132687.12223.1@smtp.office365.com> Message-ID: <1531222860.32275.1@smtp.office365.com> On Mon, Jul 9, 2018 at 12:38 PM, Balázs Gibizer wrote: > Hi, > > Here is the latest notification subteam update. [...] > > Weekly meeting > -------------- > The next meeting is planned to be held on 10th of June on > #openstack-meeting-4 > https://www.timeanddate.com/worldclock/fixedtime.html?iso=20180710T170000 I cannot make it to the meeting today. Sorry for the short notice but the meeting is cancelled. Cheers, gibi > > Cheers, > gibi > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From 270162781 at qq.com Tue Jul 10 12:21:00 2018 From: 270162781 at qq.com (=?gb18030?B?MjcwMTYyNzgx?=) Date: Tue, 10 Jul 2018 20:21:00 +0800 Subject: [openstack-dev] [neutron] Bug deputy report Message-ID:

Hi all, I'm zhaobo. I was the bug deputy for the last week, and I'm afraid I cannot attend the coming upstream meeting, so I'm sending out this report: Last week there were some high-priority bugs for Neutron. Some other bugs also need attention; I list them here:

[High]
Deleting a port on a system with 1K ports takes too long https://bugs.launchpad.net/neutron/+bug/1779882 As described, deleting a port on a system with 1K ports can take about 35s, which seems related to the time consumed by policy checking. We need to check how to improve the performance if it turns out to be an issue. Also, thanks @Ajo for the correction.

L3 AttributeError in doc job https://bugs.launchpad.net/neutron/+bug/1779801 Queens neutron broken with recent L3 removal from neutron-lib.constants https://bugs.launchpad.net/neutron/+bug/1780376 These bugs need attention, as newer neutron-lib releases removed some in-use code (https://github.com/openstack/neutron-lib/commit/ec829f9384547864aebb56390da8e17df7051aac). This already affects the Neutron Queens release.

[Medium]
A race condition may occur when concurrent agent scheduling happens https://bugs.launchpad.net/neutron/+bug/1780357 The DHCP and L3 agents may hit a race condition during the scheduling process.

[Need Attention]
Sending SIGHUP to neutron-server process causes it to hang https://bugs.launchpad.net/neutron/+bug/1780139 This bug was hit in the Queens release in a container environment; we need help from someone familiar with it to test.

[fwaas]
FWaaS instance stuck in PENDING_CREATE when devstack enable fwaas-v1 https://bugs.launchpad.net/neutron/+bug/1779978 FWAAS V1: Add or remove firewall rules, caused the status of associated firewall becomes "PENDING_UPDATE" https://bugs.launchpad.net/neutron/+bug/1780883 These two bugs seem to hit the same issue; I will fix it and associate the fix with the first one. Both appear to be FWv1 devstack configuration issues.

Thanks, Best Regards, ZhaoBo -------------- next part -------------- An HTML attachment was scrubbed... URL: From cdent+os at anticdent.org Tue Jul 10 13:52:30 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 10 Jul 2018 14:52:30 +0100 (BST) Subject: [openstack-dev] [tc] [all] TC Report 18-28 Message-ID: HTML: https://anticdent.org/tc-report-18-28.html

With feature freeze approaching at the end of this month, it seems that people are busily working on getting-stuff-done so there are not vast amounts of TC discussion to report this week. Actually that's not entirely true. There's quite a bit of interesting discussion in [the logs](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/) but it ranges widely and resists summary. If you're a fast reader, it can be pretty straightforward to read the whole week. Some highlights:

## Contextualizing Change

The topics of sharing personal context, creating a new technical vision for OpenStack, and trying to breach the boundaries between the various OpenStack sub-projects flowed in amongst one another. In a vast bit of background and perspective sharing, Zane provided his feelings on [what OpenStack ought to be](http://lists.openstack.org/pipermail/openstack-dev/2018-July/132047.html). While long, such things help provide much more context to understanding some of the issues. Reading such things can be an effort, but they fill in blanks in understanding, even if you don't agree. 
Meanwhile, and related, there are [continued requests](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-06.log.html#t2018-07-06T15:20:50) for nova to engage in orchestration, in large part because there's nothing else commonly available to do it and while that's true we can't serve people's needs well. Some have said that the need for orchestration could in part be addressed by breaking down some of the boundaries between projects but [which boundaries is unclear](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-04.log.html#t2018-07-04T01:12:27). Thierry says we should [organize work based on objectives](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-04.log.html#t2018-07-04T08:33:44). ## Goals of Health Tracking In [last week's report](/tc-report-18-27.html) I drew a connection between the [removal of diversity tags](https://review.openstack.org/#/c/579870/) and the [health tracker](https://wiki.openstack.org/wiki/OpenStack_health_tracker). This [created some](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-05.log.html#t2018-07-05T15:29:01) concern that there were going to be renewed evaluations of projects that would impact their standing in the community and that these evaluations were going to be too subjective. While it is true that the health tracker is a subjective review of how a project is doing, the evaluation is a way to discover and act on opportunities to help a project, not punish it or give it a black mark. It is important, however, that the TC is making an [independent evaluation](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-05.log.html#t2018-07-05T15:45:59). -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From jistr at redhat.com Tue Jul 10 14:20:55 2018 From: jistr at redhat.com (=?UTF-8?B?SmnFmcOtIFN0csOhbnNrw70=?=) Date: Tue, 10 Jul 2018 16:20:55 +0200 Subject: [openstack-dev] [tripleo] Updates/upgrades equivalent for external_deploy_tasks Message-ID: <1420c4ac-b6f6-5606-d1c8-1bb05a941d2e@redhat.com> Hi, with the move to config-download deployments, we'll be moving from executing external installers (like ceph-ansible) via Heat resources encapsulating Mistral workflows towards executing them via Ansible directly (nested Ansible process via external_deploy_tasks). Updates and upgrades still need to be addressed here. I think we should introduce external_update_tasks and external_upgrade_tasks for this purpose, but i see two options how to construct the workflow with them. During update (mentioning just updates, but upgrades would be done analogously) we could either: A) Run external_update_tasks, then external_deploy_tasks. This works with the assumption that updates are done very similarly to deployment. The external_update_tasks could do some prep work and/or export Ansible variables which then could affect what external_deploy_tasks do (e.g. in case of ceph-ansible we'd probably override the playbook path). This way we could also disable specific parts of external_deploy_tasks on update, in case reuse is undesirable in some places. B) Run only external_update_tasks. This would mean code for updates/upgrades of externally deployed services would be completely separated from how their deployment is done. If we wanted to reuse some of the deployment tasks, we'd have to use the YAML anchor referencing mechanisms. 
(&anchor, *anchor) I think the options are comparable in terms of what is possible to implement with them, the main difference is what use cases we want to optimize for. Looking at what we currently have in external_deploy_tasks (e.g. [1][2]), i think we'd have to do a lot of explicit reuse if we went with B (inventory and variables generation, ...). So i'm leaning towards option A (WIP patch at [3]) which should give us this reuse more naturally. This approach would also be more in line with how we already do normal updates and upgrades (also reusing deployment tasks). Please let me know in case you have any concerns about such approach (looking especially at Ceph and OpenShift integrators :) ). Thanks Jirka [1] https://github.com/openstack/tripleo-heat-templates/blob/8d7525fdf79f915e3f880ea0f3fd299234ecc635/docker/services/ceph-ansible/ceph-base.yaml#L340-L467 [2] https://github.com/openstack/tripleo-heat-templates/blob/8d7525fdf79f915e3f880ea0f3fd299234ecc635/extraconfig/services/openshift-master.yaml#L70-L231 [3] https://review.openstack.org/#/c/579170/ From jim at jimrollenhagen.com Tue Jul 10 14:41:36 2018 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Tue, 10 Jul 2018 10:41:36 -0400 Subject: [openstack-dev] [OSSN-0084] Data retained after deletion of a ScaleIO volume In-Reply-To: <2dc4a6ce-b5a3-8cee-724c-4295fd595f54@redhat.com> References: <2dc4a6ce-b5a3-8cee-724c-4295fd595f54@redhat.com> Message-ID: On Tue, Jul 10, 2018 at 4:20 AM, Luke Hinds wrote: > Data retained after deletion of a ScaleIO volume > --- > > ### Summary ### > Certain storage volume configurations allow newly created volumes to > contain previous data. This could lead to leakage of sensitive > information between tenants. > > ### Affected Services / Software ### > Cinder releases up to and including Queens with ScaleIO volumes > using thin volumes and zero padding. > According to discussion in the bug, this bug occurs with ScaleIO volumes using thick volumes and with zero padding disabled. If the bug is with thin volumes and zero padding, then the workaround seems quite wrong. :) I'm not super familiar with Cinder, so could some Cinder folks check this out and re-issue a more accurate OSSN, please? // jim > > ### Discussion ### > Using both thin volumes and zero padding does not ensure data contained > in a volume is actually deleted. The default volume provisioning rule is > set to thick so most installations are likely not affected. Operators > can check their configuration in `cinder.conf` or check for zero padding > with this command `scli --query_all`. > > #### Recommended Actions #### > > Operators can use the following two workarounds, until the release of > Rocky (planned 30th August 2018) which resolves the issue. > > 1. Swap to thin volumes > > 2. 
Ensure ScaleIO storage pools use zero-padding with: > > `scli --modify_zero_padding_policy > (((--protection_domain_id | > --protection_domain_name ) > --storage_pool_name ) | --storage_pool_id ) > (--enable_zero_padding | --disable_zero_padding)` > > ### Contacts / References ### > Author: Nick Tait > This OSSN : https://wiki.openstack.org/wiki/OSSN/OSSN-0084 > Original LaunchPad Bug : https://bugs.launchpad.net/ossn/+bug/1699573 > Mailing List : [Security] tag on openstack-dev at lists.openstack.org > OpenStack Security Project : https://launchpad.net/~openstack-ossg > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.friesen at windriver.com Tue Jul 10 15:03:19 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Tue, 10 Jul 2018 09:03:19 -0600 Subject: [openstack-dev] creating instance In-Reply-To: <542814897.1850308.1531213442171@mail.yahoo.com> References: <542814897.1850308.1531213442171.ref@mail.yahoo.com> <542814897.1850308.1531213442171@mail.yahoo.com> Message-ID: <5B44CAB7.2080007@windriver.com> On 07/10/2018 03:04 AM, jayshankar nair wrote: > Hi, > > I am trying to create an instance of cirros os(Project/Compute/Instances). I am > getting the following error. > > Error: Failed to perform requested operation on instance "cirros1", the instance > has an error status: Please try again later [Error: Build of instance > 5de65e6d-fca6-4e78-a688-ead942e8ed2a aborted: The server has either erred or is > incapable of performing the requested operation. (HTTP 500) (Request-ID: > req-91535564-4caf-4975-8eff-7bca515d414e)]. > > How to debug the error. You'll want to look at the logs for the individual service. Since you were trying to create a server instance, you probably want to start with the logs for the "nova-api" service to see if there are any failure messages. You can then check the logs for "nova-scheduler", "nova-conductor", and "nova-compute". There should be something useful in one of those. Chris From jaypipes at gmail.com Tue Jul 10 15:52:20 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Tue, 10 Jul 2018 11:52:20 -0400 Subject: [openstack-dev] [nova] [placement] placement update 18-27 In-Reply-To: References: Message-ID: On 07/09/2018 02:52 PM, Chris Dent wrote: > On Fri, 6 Jul 2018, Chris Dent wrote: > >> This is placement update 18-27, a weekly update of ongoing >> development related to the [OpenStack](https://www.openstack.org/) >> [placement >> service](https://developer.openstack.org/api-ref/placement/). This >> is a contract version. > > Forgot to mention: There won't be an 18-28 this Friday, I'll be out > and about. If someone else would like to do one, that would be > great. On it. -jay From prometheanfire at gentoo.org Tue Jul 10 15:59:33 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Tue, 10 Jul 2018 10:59:33 -0500 Subject: [openstack-dev] [requirements][taskflow] networkx migration In-Reply-To: <20180709201523.y3qhroncve5vqmu7@gentoo.org> References: <20180709201523.y3qhroncve5vqmu7@gentoo.org> Message-ID: <20180710155933.g362uzftzixcdpcy@gentoo.org> On 18-07-09 15:15:23, Matthew Thode wrote: > We have a patch that looks good, can we get it merged? 
> > https://review.openstack.org/#/c/577833/ > Anyone from taskflow around? Maybe it's better to just mail the ptl. -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From jaypipes at gmail.com Tue Jul 10 16:09:36 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Tue, 10 Jul 2018 12:09:36 -0400 Subject: [openstack-dev] [nova] [placement] placement update 18-27 In-Reply-To: References: Message-ID: <5df65d3c-ccf9-4e24-c516-95a5964744c4@gmail.com> On 07/06/2018 10:09 AM, Chris Dent wrote: > # Questions > > * Will consumer id, project and user id always be a UUID? We've >   established for certain that user id will not, but things are >   less clear for the other two. This issue is compounded by the >   fact that these two strings are different but the same UUID: >   5eb033fd-c550-420e-a31c-3ec2703a403c, >   5eb033fdc550420ea31c3ec2703a403c (bug 1758057 mentioned above) but >   we treat them differently in our code. As mentioned by a couple people on IRC, a consumer's external project identifier and external user identifier come directly from Keystone. Since Keystone has no rule about these values being UUIDs or even UUID-like, we clearly cannot treat them as UUIDs in the placement service. Our backend data storage for these attributes is suitably a String(255) column and there is no validation done on these values. In fact, the project and user external identifiers are taken directly from the nova.context WSGI environ when sending from the placement client [1]. So, really, the only thing we're discussing is whether consumer_id is always a UUID. I believe it should be, and the fact that it's referred to as consumer_uuid in so many places should be indicative of its purpose. I know originally the field in the DB was a String(64), but it's since been changed to String(36), further evidence that consumer_id was intended to be a UUID. I believe we should validate it as such at the placement API layer. The only current consumers in the placement service are instances and migrations, both of which use a UUID identifier. I don't think it's too onerous to require future consumers to be identified with a UUID, and it would be nice to be able to rely on a structured, agreed format for unique identification of consumers across services. As noted the project_id and user_id are not required to be UUIDs and I don't believe we should add any validation for those fields. Best, -jay [1] For those curious, nova-scheduler calls scheduler.utils.claim_resources(...): https://github.com/openstack/nova/blob/8469fa70dafa83cb068538679100bede7679edc3/nova/scheduler/filter_scheduler.py#L219 which itself calls reportclient.claim_resources(...) 
with the instance.user_id and instance.project_id values: https://github.com/openstack/nova/blob/8469fa70dafa83cb068538679100bede7679edc3/nova/scheduler/utils.py#L500 The instance.project_id and instance.user_id values are populated from the WSGI environ here: https://github.com/openstack/nova/blob/8469fa70dafa83cb068538679100bede7679edc3/nova/compute/api.py#L831-L832 From doug at doughellmann.com Tue Jul 10 16:21:35 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 10 Jul 2018 12:21:35 -0400 Subject: [openstack-dev] [Release-job-failures][group-based-policy] Release of openstack/group-based-policy failed In-Reply-To: References: Message-ID: <1531239660-sup-276@lrrr.local> Excerpts from zuul's message of 2018-07-10 06:38:24 +0000: > Build failed. > > - release-openstack-python http://logs.openstack.org/5b/5bbcfd7b41d39339ff9b9f8654681406d2508205/release/release-openstack-python/269f8ce/ : FAILURE in 6m 31s > - announce-release announce-release : SKIPPED > - propose-update-constraints propose-update-constraints : SKIPPED > The release job failed trying to pip install something due to an SSL error. http://logs.openstack.org/5b/5bbcfd7b41d39339ff9b9f8654681406d2508205/release/release-openstack-python/269f8ce/job-output.txt.gz#_2018-07-10_06_37_26_065386 From juliaashleykreger at gmail.com Tue Jul 10 16:28:05 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Tue, 10 Jul 2018 12:28:05 -0400 Subject: [openstack-dev] [ironic] "mid-cycle" call Tuesday, July 17th - 3 PM UTC Message-ID: Fellow ironicans! Lend me your ears! With the cycle quickly coming to a close, we wanted to take a couple hours for high bandwidth discussions covering the end of cycle for Ironic, as well as any items that need to be established in advance of the PTG. We're going to use bluejeans[1] since it seems to work well for everyone, and I've posted a rough agenda[2] to an etherpad. If there are additional items, please feel free to add them to the etherpad. -Julia [1]: https://bluejeans.com/437242882/ [2]: https://etherpad.openstack.org/p/ironic-rocky-midcycle From doug at doughellmann.com Tue Jul 10 16:29:21 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 10 Jul 2018 12:29:21 -0400 Subject: [openstack-dev] [requirements][taskflow] networkx migration In-Reply-To: <20180710155933.g362uzftzixcdpcy@gentoo.org> References: <20180709201523.y3qhroncve5vqmu7@gentoo.org> <20180710155933.g362uzftzixcdpcy@gentoo.org> Message-ID: <1531240097-sup-9184@lrrr.local> Excerpts from Matthew Thode's message of 2018-07-10 10:59:33 -0500: > On 18-07-09 15:15:23, Matthew Thode wrote: > > We have a patch that looks good, can we get it merged? > > > > https://review.openstack.org/#/c/577833/ > > > > Anyone from taskflow around? Maybe it's better to just mail the ptl. > We could use more reviewers on taskflow (and have needed them for a while). Perhaps we can get some of the consuming projects to give it a little live so the Oslo folks who are less familiar with it feel confident of the change(s) this close to the final release date for non-client libraries. Doug From doug at doughellmann.com Tue Jul 10 17:32:17 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 10 Jul 2018 13:32:17 -0400 Subject: [openstack-dev] [tc] Technical Committee update for 2018-07-10 Message-ID: <1531243892-sup-7915@lrrr.local> This is the weekly summary of work being done by the Technical Committee members. 
The full list of active items is managed in the wiki: https://wiki.openstack.org/wiki/Technical_Committee_Tracker We also track TC objectives for the cycle using StoryBoard at: https://storyboard.openstack.org/#!/project/923 == Recent Activity == Other approved changes: * remove project team diversity tags: https://review.openstack.org/#/c/579870/ Office hour logs: * http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-04.log.html#t2018-07-04T01:00:01 * http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-05.log.html#t2018-07-05T15:00:09 * http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-10.log.html#t2018-07-10T09:01:41 == Ongoing Discussions == The Adjutant team application as the minimum number of votes required to be approved. It could not be formally accepted until 14 July and I know we have several TC members traveling this week so I will hold it open until next week to allow for final votes and discussion. * https://review.openstack.org/553643 Colleen is going to contact the election officials about scheduling the elections for the end of Rocky / beginning of Stein. Project team "health check" discussions are continuing. As Chris mentioned in his email this week, the point of this process is to have TC members actively engage with each team to understand any potential issues they are facing. We have a few teams impacted by the ZTE situation, and we have a few other teams with some affiliation diversity concerns that we would like to try to help address. We have also discovered that some teams are healthier than we expected based on how obvious (or not) their activity was. * http://lists.openstack.org/pipermail/openstack-dev/2018-July/132101.html I have made a few revisions to the python3-first goal based on feedback on the patch and testing. I expect a few more small updates with links to examples. * https://review.openstack.org/575933 I have also proposed a PTI update for the documentation jobs that is a prerequisite to moving ahead with the python 3 changes during Stein. * https://review.openstack.org/580495 * http://lists.openstack.org/pipermail/openstack-dev/2018-July/132025.html == TC member actions/focus/discussions for the coming week(s) == Thierry's changes to the Project Team Guide to include a technical guidance section need reviewers. * https://review.openstack.org/#/c/578070/1 Zane needs to update the proposal for diversity requirements or guidance for new project teams based on existing feedback. * https://review.openstack.org/#/c/567944/ Please vote on the Adjutant team application. https://review.openstack.org/553643 Remember that we agreed to send status updates on initiatives separately to openstack-dev every two weeks. If you are working on something for which there has not been an update in a couple of weeks, please consider summarizing the status. == Contacting the TC == The Technical Committee uses a series of weekly "office hour" time slots for synchronous communication. We hope that by having several such times scheduled, we will have more opportunities to engage with members of the community from different timezones. 
Office hour times in #openstack-tc: * 09:00 UTC on Tuesdays * 01:00 UTC on Wednesdays * 15:00 UTC on Thursdays If you have something you would like the TC to discuss, you can add it to our office hour conversation starter etherpad at: https://etherpad.openstack.org/p/tc-office-hour-conversation-starters Many of us also run IRC bouncers which stay in #openstack-tc most of the time, so please do not feel that you need to wait for an office hour time to pose a question or offer a suggestion. You can use the string "tc-members" to alert the members to your question. You will find channel logs with past conversations at http://eavesdrop.openstack.org/irclogs/%23openstack-tc/ If you expect your topic to require significant discussion or to need input from members of the community other than the TC, please start a mailing list discussion on openstack-dev at lists.openstack.org and use the subject tag "[tc]" to bring it to the attention of TC members. From fungi at yuggoth.org Tue Jul 10 19:00:47 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 10 Jul 2018 19:00:47 +0000 Subject: [openstack-dev] [first-contact] Recommendations for contributing organizations In-Reply-To: <20180710185618.gzfzx3sz2oeszj7q@yuggoth.org> References: <20180612195325.lr364w6skajhhtow@yuggoth.org> <20180710185618.gzfzx3sz2oeszj7q@yuggoth.org> Message-ID: <20180710190047.chvv2egwe5imngds@yuggoth.org> If you're interested in helping with an addition to the Contributor Guide detailing places where those employing contributors to OpenStack might be able to help improve the experience for their employees and increase their ability to succeed within the community, please chime in on this SIGs ML thread or the review linked from it: http://lists.openstack.org/pipermail/openstack-sigs/2018-July/000429.html -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From gr at ham.ie Tue Jul 10 19:03:51 2018 From: gr at ham.ie (Graham Hayes) Date: Tue, 10 Jul 2018 20:03:51 +0100 Subject: [openstack-dev] [designate] Meeting tomorrow Message-ID: <8a0494f3-8af8-574f-3ae9-14ad71c56b2e@ham.ie> Unfortunately something has come up and I have an appointment I have to be at for our scheduled slot (11:00 UTC). Can someone else chair, or we can post pone the meeting for 1 week. Does anyone have any preferences? Thanks, - Graham From mtreinish at kortar.org Tue Jul 10 19:16:14 2018 From: mtreinish at kortar.org (Matthew Treinish) Date: Tue, 10 Jul 2018 15:16:14 -0400 Subject: [openstack-dev] [stestr?][tox?][infra?] Unexpected success isn't a failure In-Reply-To: References: <20180710030347.GA11011@sinanju.localdomain> Message-ID: <20180710191614.GC19605@sinanju.localdomain> On Tue, Jul 10, 2018 at 10:16:37AM +0100, Chris Dent wrote: > On Mon, 9 Jul 2018, Matthew Treinish wrote: > > > It's definitely a bug, and likely a bug in stestr (or one of the lower level > > packages like testtools or python-subunit), because that's what's generating > > the return code. Tox just looks at the return code from the commands to figure > > out if things were successful or not. I'm a bit surprised by this though I > > thought we covered the unxsuccess and xfail cases because I would have expected > > cdent to file a bug if it didn't. Looking at the stestr tests we don't have > > coverage for the unxsuccess case so I can see how this slipped through. 
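For illustration, a minimal reproducer of the unexpected-success case (a hypothetical test module, not something taken from the stestr tree) is just a test marked as an expected failure that actually passes:

import unittest


class TestUnexpectedSuccess(unittest.TestCase):
    @unittest.expectedFailure
    def test_marked_xfail_but_passes(self):
        # Decorated as an expected failure, but the assertion succeeds, so
        # the result is recorded as an "unexpected success" (uxsuccess).
        self.assertEqual(2, 1 + 1)

With the behaviour described above, a run containing only this test would still exit 0, even though the unexpected success ought to make the run fail.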
> > This was reported on testrepository some years ago and a bit of > analysis was done: https://bugs.launchpad.net/testrepository/+bug/1429196 > This actually helps a lot, because I was seeing the same issue when I tried writing a quick patch to address this. When I manually poked the TestResult object it didn't have anything in the unxsuccess list. So instead of relying on that I wrote this patch: https://github.com/mtreinish/stestr/pull/188 which uses the output filter's internal function for counting results to find unxsuccess tests. It's still not perfect though because if someone runs with the --no-subunit-trace flag it still doesn't work (because that call path never gets run) but it's at least a starting point. I've marked it as WIP for now, but I'm thinking we could merge it as is and leave the --no-subunit-trace and unxsuccess as a known issues for now, since xfail and unxsuccess are pretty uncommon in practice. (gabbi is the only thing I've seen really use it) -Matt Treinish > So yeah, I did file a bug but it fell off the radar during those > dark times. > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From johfulto at redhat.com Tue Jul 10 19:18:29 2018 From: johfulto at redhat.com (John Fulton) Date: Tue, 10 Jul 2018 15:18:29 -0400 Subject: [openstack-dev] [tripleo] Updates/upgrades equivalent for external_deploy_tasks In-Reply-To: <1420c4ac-b6f6-5606-d1c8-1bb05a941d2e@redhat.com> References: <1420c4ac-b6f6-5606-d1c8-1bb05a941d2e@redhat.com> Message-ID: On Tue, Jul 10, 2018 at 10:21 AM Jiří Stránský wrote: > > Hi, > > with the move to config-download deployments, we'll be moving from > executing external installers (like ceph-ansible) via Heat resources > encapsulating Mistral workflows towards executing them via Ansible > directly (nested Ansible process via external_deploy_tasks). > > Updates and upgrades still need to be addressed here. I think we should > introduce external_update_tasks and external_upgrade_tasks for this > purpose, but i see two options how to construct the workflow with them. > > During update (mentioning just updates, but upgrades would be done > analogously) we could either: > > A) Run external_update_tasks, then external_deploy_tasks. > > This works with the assumption that updates are done very similarly to > deployment. The external_update_tasks could do some prep work and/or > export Ansible variables which then could affect what > external_deploy_tasks do (e.g. in case of ceph-ansible we'd probably > override the playbook path). This way we could also disable specific > parts of external_deploy_tasks on update, in case reuse is undesirable > in some places. > > B) Run only external_update_tasks. > > This would mean code for updates/upgrades of externally deployed > services would be completely separated from how their deployment is > done. If we wanted to reuse some of the deployment tasks, we'd have to > use the YAML anchor referencing mechanisms. (&anchor, *anchor) > > I think the options are comparable in terms of what is possible to > implement with them, the main difference is what use cases we want to > optimize for. > > Looking at what we currently have in external_deploy_tasks (e.g. > [1][2]), i think we'd have to do a lot of explicit reuse if we went with > B (inventory and variables generation, ...). So i'm leaning towards > option A (WIP patch at [3]) which should give us this reuse more > naturally. 
This approach would also be more in line with how we already > do normal updates and upgrades (also reusing deployment tasks). Please > let me know in case you have any concerns about such approach (looking > especially at Ceph and OpenShift integrators :) ). Thanks for thinking of this Jirka. I like option A and your WIP patch (579170). As you say, it fits with what we're already doing and avoids explicit reuse. John > > Thanks > > Jirka > > [1] > https://github.com/openstack/tripleo-heat-templates/blob/8d7525fdf79f915e3f880ea0f3fd299234ecc635/docker/services/ceph-ansible/ceph-base.yaml#L340-L467 > [2] > https://github.com/openstack/tripleo-heat-templates/blob/8d7525fdf79f915e3f880ea0f3fd299234ecc635/extraconfig/services/openshift-master.yaml#L70-L231 > [3] https://review.openstack.org/#/c/579170/ From martin.chlumsky at gmail.com Tue Jul 10 19:28:11 2018 From: martin.chlumsky at gmail.com (Martin Chlumsky) Date: Tue, 10 Jul 2018 15:28:11 -0400 Subject: [openstack-dev] [OSSN-0084] Data retained after deletion of a ScaleIO volume In-Reply-To: References: <2dc4a6ce-b5a3-8cee-724c-4295fd595f54@redhat.com> Message-ID: It is the workaround that is right and the discussion part that is wrong. I am familiar with this bug. Using thin volumes *and/or* enabling zero padding DOES ensure data contained in a volume is actually deleted. On Tue, Jul 10, 2018 at 10:41 AM Jim Rollenhagen wrote: > On Tue, Jul 10, 2018 at 4:20 AM, Luke Hinds wrote: > >> Data retained after deletion of a ScaleIO volume >> --- >> >> ### Summary ### >> Certain storage volume configurations allow newly created volumes to >> contain previous data. This could lead to leakage of sensitive >> information between tenants. >> >> ### Affected Services / Software ### >> Cinder releases up to and including Queens with ScaleIO volumes >> using thin volumes and zero padding. >> > > According to discussion in the bug, this bug occurs with ScaleIO volumes > using thick volumes and with zero padding disabled. > > If the bug is with thin volumes and zero padding, then the workaround > seems quite wrong. :) > > I'm not super familiar with Cinder, so could some Cinder folks check this > out and re-issue a more accurate OSSN, please? > > // jim > > >> >> ### Discussion ### >> Using both thin volumes and zero padding does not ensure data contained >> in a volume is actually deleted. The default volume provisioning rule is >> set to thick so most installations are likely not affected. Operators >> can check their configuration in `cinder.conf` or check for zero padding >> with this command `scli --query_all`. >> >> #### Recommended Actions #### >> >> Operators can use the following two workarounds, until the release of >> Rocky (planned 30th August 2018) which resolves the issue. >> >> 1. Swap to thin volumes >> >> 2. 
Ensure ScaleIO storage pools use zero-padding with: >> >> `scli --modify_zero_padding_policy >> (((--protection_domain_id | >> --protection_domain_name ) >> --storage_pool_name ) | --storage_pool_id ) >> (--enable_zero_padding | --disable_zero_padding)` >> >> ### Contacts / References ### >> Author: Nick Tait >> This OSSN : https://wiki.openstack.org/wiki/OSSN/OSSN-0084 >> Original LaunchPad Bug : https://bugs.launchpad.net/ossn/+bug/1699573 >> Mailing List : [Security] tag on openstack-dev at lists.openstack.org >> OpenStack Security Project : https://launchpad.net/~openstack-ossg >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Tue Jul 10 20:01:19 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 10 Jul 2018 15:01:19 -0500 Subject: [openstack-dev] [Openstack-sigs] [first-contact] Forum summary on recommendations for contributing organizations In-Reply-To: <20180710185618.gzfzx3sz2oeszj7q@yuggoth.org> References: <20180612195325.lr364w6skajhhtow@yuggoth.org> <20180710185618.gzfzx3sz2oeszj7q@yuggoth.org> Message-ID: Cross posting this to the dev-list as I think there will be good input from there as well :) -Kendall (diablo_rojo) On Tue, Jul 10, 2018 at 11:56 AM Jeremy Stanley wrote: > On 2018-06-12 19:53:25 +0000 (+0000), Jeremy Stanley wrote: > [...] > > Finally, we came up with a handful of action items. One was me > > sending this summary (only a couple weeks late!), another was > > Matthew Oliver submitting a patch to the contributor guide repo > > with our initial stub text. > [...] > > An early draft for the Contributor Guide addition with > recommendations to contributing organizations was subsequently > proposed as https://review.openstack.org/578676 but could use some > additional input and polish from other interested members of the > community. Please have a look and provide any feedback you have as > review comments there or via followup to this thread (whichever is > more convenient). > -- > Jeremy Stanley > _______________________________________________ > openstack-sigs mailing list > openstack-sigs at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim at jimrollenhagen.com Tue Jul 10 20:08:11 2018 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Tue, 10 Jul 2018 16:08:11 -0400 Subject: [openstack-dev] [OSSN-0084] Data retained after deletion of a ScaleIO volume In-Reply-To: References: <2dc4a6ce-b5a3-8cee-724c-4295fd595f54@redhat.com> Message-ID: On Tue, Jul 10, 2018 at 3:28 PM, Martin Chlumsky wrote: > It is the workaround that is right and the discussion part that is wrong. > > I am familiar with this bug. Using thin volumes *and/or* enabling zero > padding DOES ensure data contained > in a volume is actually deleted. > Great, that's super helpful. Thanks! Is there someone (Luke?) 
on the list that can send a correction for this OSSN to all the lists it needs to go to? // jim > > On Tue, Jul 10, 2018 at 10:41 AM Jim Rollenhagen > wrote: > >> On Tue, Jul 10, 2018 at 4:20 AM, Luke Hinds wrote: >> >>> Data retained after deletion of a ScaleIO volume >>> --- >>> >>> ### Summary ### >>> Certain storage volume configurations allow newly created volumes to >>> contain previous data. This could lead to leakage of sensitive >>> information between tenants. >>> >>> ### Affected Services / Software ### >>> Cinder releases up to and including Queens with ScaleIO volumes >>> using thin volumes and zero padding. >>> >> >> According to discussion in the bug, this bug occurs with ScaleIO volumes >> using thick volumes and with zero padding disabled. >> >> If the bug is with thin volumes and zero padding, then the workaround >> seems quite wrong. :) >> >> I'm not super familiar with Cinder, so could some Cinder folks check this >> out and re-issue a more accurate OSSN, please? >> >> // jim >> >> >>> >>> ### Discussion ### >>> Using both thin volumes and zero padding does not ensure data contained >>> in a volume is actually deleted. The default volume provisioning rule is >>> set to thick so most installations are likely not affected. Operators >>> can check their configuration in `cinder.conf` or check for zero padding >>> with this command `scli --query_all`. >>> >>> #### Recommended Actions #### >>> >>> Operators can use the following two workarounds, until the release of >>> Rocky (planned 30th August 2018) which resolves the issue. >>> >>> 1. Swap to thin volumes >>> >>> 2. Ensure ScaleIO storage pools use zero-padding with: >>> >>> `scli --modify_zero_padding_policy >>> (((--protection_domain_id | >>> --protection_domain_name ) >>> --storage_pool_name ) | --storage_pool_id ) >>> (--enable_zero_padding | --disable_zero_padding)` >>> >>> ### Contacts / References ### >>> Author: Nick Tait >>> This OSSN : https://wiki.openstack.org/wiki/OSSN/OSSN-0084 >>> Original LaunchPad Bug : https://bugs.launchpad.net/ossn/+bug/1699573 >>> Mailing List : [Security] tag on openstack-dev at lists.openstack.org >>> OpenStack Security Project : https://launchpad.net/~openstack-ossg >>> >>> >>> ____________________________________________________________ >>> ______________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >>> unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >> unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sombrafam at gmail.com Tue Jul 10 20:12:27 2018 From: sombrafam at gmail.com (Erlon Cruz) Date: Tue, 10 Jul 2018 17:12:27 -0300 Subject: [openstack-dev] [cinder] Planning Etherpad for Denver 2018 PTG In-Reply-To: <80f839e7-36a4-55ca-7c01-9795e5fcf28a@gmail.com> References: <80f839e7-36a4-55ca-7c01-9795e5fcf28a@gmail.com> Message-ID: Thanks Jay! Em sex, 6 de jul de 2018 às 14:30, Jay S Bryant escreveu: > All, > > I have created an etherpad to start planning for the Denver PTG in > September. [1] Please start adding topics to the etherpad. > > Look forward to seeing you all there! > > Jay > > (jungleboyj) > > [1] https://etherpad.openstack.org/p/cinder-ptg-planning-denver-9-2018 > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sumitnaiksatam at gmail.com Tue Jul 10 20:24:25 2018 From: sumitnaiksatam at gmail.com (Sumit Naiksatam) Date: Tue, 10 Jul 2018 13:24:25 -0700 Subject: [openstack-dev] [Release-job-failures][group-based-policy] Release of openstack/group-based-policy failed In-Reply-To: <1531239660-sup-276@lrrr.local> References: <1531239660-sup-276@lrrr.local> Message-ID: Thanks Doug for noticing this. I am guessing this was a transient issue. How do we trigger this job again to confirm? On Tue, Jul 10, 2018 at 9:21 AM, Doug Hellmann wrote: > Excerpts from zuul's message of 2018-07-10 06:38:24 +0000: >> Build failed. >> >> - release-openstack-python http://logs.openstack.org/5b/5bbcfd7b41d39339ff9b9f8654681406d2508205/release/release-openstack-python/269f8ce/ : FAILURE in 6m 31s >> - announce-release announce-release : SKIPPED >> - propose-update-constraints propose-update-constraints : SKIPPED >> > > The release job failed trying to pip install something due to an SSL > error. > > http://logs.openstack.org/5b/5bbcfd7b41d39339ff9b9f8654681406d2508205/release/release-openstack-python/269f8ce/job-output.txt.gz#_2018-07-10_06_37_26_065386 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mjturek at linux.vnet.ibm.com Tue Jul 10 20:31:48 2018 From: mjturek at linux.vnet.ibm.com (Michael Turek) Date: Tue, 10 Jul 2018 16:31:48 -0400 Subject: [openstack-dev] [ironic] Ironic Bug Day July 12 2018 1:00 - 2:00 PM UTC Message-ID: <59189c21-797e-1777-1858-6019353889cb@linux.vnet.ibm.com> Hey all, This month's bug day was delayed a week and will take place on Thursday the 12th from 1:00 UTC to 2:00 UTC For location, time, and agenda details please see https://etherpad.openstack.org/p/ironic-bug-day-july-2018 If you would like to propose topics, feel free to do it in the etherpad! 
Thanks, Mike Turek From doug at doughellmann.com Tue Jul 10 21:47:25 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 10 Jul 2018 17:47:25 -0400 Subject: [openstack-dev] [Release-job-failures][group-based-policy] Release of openstack/group-based-policy failed In-Reply-To: References: <1531239660-sup-276@lrrr.local> Message-ID: <1531259220-sup-4165@lrrr.local> Excerpts from Sumit Naiksatam's message of 2018-07-10 13:24:25 -0700: > Thanks Doug for noticing this. I am guessing this was a transient > issue. How do we trigger this job again to confirm? Someone from the infra team with access to the zuul interface can help you with that. > > On Tue, Jul 10, 2018 at 9:21 AM, Doug Hellmann wrote: > > Excerpts from zuul's message of 2018-07-10 06:38:24 +0000: > >> Build failed. > >> > >> - release-openstack-python http://logs.openstack.org/5b/5bbcfd7b41d39339ff9b9f8654681406d2508205/release/release-openstack-python/269f8ce/ : FAILURE in 6m 31s > >> - announce-release announce-release : SKIPPED > >> - propose-update-constraints propose-update-constraints : SKIPPED > >> > > > > The release job failed trying to pip install something due to an SSL > > error. > > > > http://logs.openstack.org/5b/5bbcfd7b41d39339ff9b9f8654681406d2508205/release/release-openstack-python/269f8ce/job-output.txt.gz#_2018-07-10_06_37_26_065386 > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From lbragstad at gmail.com Tue Jul 10 22:05:34 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Tue, 10 Jul 2018 17:05:34 -0500 Subject: [openstack-dev] [keystone] Adding Wangxiyuan to keystone core Message-ID: Hi all, Today we added Wangxiyuan to the keystone core team [0]. He's been doing a bunch of great work over the last couple releases and has become a valuable reviewer [1][2]. He's also been instrumental in pushing forward the unified limits work not only in keystone, but across projects. Thanks Wangxiyuan for all your help and welcome to the team! Lance [0] http://eavesdrop.openstack.org/meetings/keystone/2018/keystone.2018-07-10-16.00.log.html#l-100 [1] http://stackalytics.com/?module=keystone-group [2] http://stackalytics.com/?module=keystone-group&release=queens -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From johnsomor at gmail.com Tue Jul 10 23:47:32 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Tue, 10 Jul 2018 16:47:32 -0700 Subject: [openstack-dev] [requirements][taskflow] networkx migration In-Reply-To: <1531240097-sup-9184@lrrr.local> References: <20180709201523.y3qhroncve5vqmu7@gentoo.org> <20180710155933.g362uzftzixcdpcy@gentoo.org> <1531240097-sup-9184@lrrr.local> Message-ID: Octavia passed tempest with this change and networkx 2.1. Michael On Tue, Jul 10, 2018 at 9:29 AM Doug Hellmann wrote: > > Excerpts from Matthew Thode's message of 2018-07-10 10:59:33 -0500: > > On 18-07-09 15:15:23, Matthew Thode wrote: > > > We have a patch that looks good, can we get it merged? > > > > > > https://review.openstack.org/#/c/577833/ > > > > > > > Anyone from taskflow around? Maybe it's better to just mail the ptl. > > > > We could use more reviewers on taskflow (and have needed them for a > while). 
Perhaps we can get some of the consuming projects to give it a > little live so the Oslo folks who are less familiar with it feel > confident of the change(s) this close to the final release date for > non-client libraries. > > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From emilien at redhat.com Tue Jul 10 23:56:20 2018 From: emilien at redhat.com (Emilien Macchi) Date: Tue, 10 Jul 2018 19:56:20 -0400 Subject: [openstack-dev] Plan to switch the undercloud to be containerized by default Message-ID: This is an update on where things are regarding $topic, based on feedback I've got from the work done recently: 1) Switch --use-heat to take a boolean and deprecate it We still want to allow users to deploy non containerized underclouds, so we made this patch so they can use --use-heat=False: https://review.openstack.org/#/c/581467/ Also https://review.openstack.org/#/c/581468 and https://review.openstack.org/581180 as dependencies 2) Configure CI jobs for containerized undercloud, except scenario001, 002 for timeout reasons (and figure out this problem in a parallel effort) https://review.openstack.org/#/c/575330 https://review.openstack.org/#/c/579755 3) Switch tripleoclient to deploy by default a containerized undercloud https://review.openstack.org/576218 4) Improve performances in general so scenario001/002 doesn't timeout when containerized undercloud is enabled https://review.openstack.org/#/c/581183 is the patch that'll enable the containerized undercloud https://review.openstack.org/#/c/577889/ is a patch that enables pipelining in ansible/quickstart, but more is about to come, I'll update the patches tonight. 5) Cleanup quickstart to stop using use-heat except for fs003 (needed to disable containers, and keep coverage for non containerized undercloud) https://review.openstack.org/#/c/581534/ Reviews are welcome, we aim to merge this work by milestone 3, in less than 2 weeks from now. Thanks! -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Tue Jul 10 23:57:11 2018 From: emilien at redhat.com (Emilien Macchi) Date: Tue, 10 Jul 2018 19:57:11 -0400 Subject: [openstack-dev] [tripleo] Plan to switch the undercloud to be containerized by default In-Reply-To: References: Message-ID: with [tripleo] tag... 
On Tue, Jul 10, 2018 at 7:56 PM Emilien Macchi wrote: > This is an update on where things are regarding $topic, based on feedback > I've got from the work done recently: > > 1) Switch --use-heat to take a boolean and deprecate it > > We still want to allow users to deploy non containerized underclouds, so > we made this patch so they can use --use-heat=False: > https://review.openstack.org/#/c/581467/ > Also https://review.openstack.org/#/c/581468 and > https://review.openstack.org/581180 as dependencies > > 2) Configure CI jobs for containerized undercloud, except scenario001, 002 > for timeout reasons (and figure out this problem in a parallel effort) > > https://review.openstack.org/#/c/575330 > https://review.openstack.org/#/c/579755 > > 3) Switch tripleoclient to deploy by default a containerized undercloud > > https://review.openstack.org/576218 > > 4) Improve performances in general so scenario001/002 doesn't timeout when > containerized undercloud is enabled > > https://review.openstack.org/#/c/581183 is the patch that'll enable the > containerized undercloud > https://review.openstack.org/#/c/577889/ is a patch that enables > pipelining in ansible/quickstart, but more is about to come, I'll update > the patches tonight. > > 5) Cleanup quickstart to stop using use-heat except for fs003 (needed to > disable containers, and keep coverage for non containerized undercloud) > > https://review.openstack.org/#/c/581534/ > > > Reviews are welcome, we aim to merge this work by milestone 3, in less > than 2 weeks from now. > Thanks! > -- > Emilien Macchi > -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengh512 at gmail.com Wed Jul 11 02:54:21 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Wed, 11 Jul 2018 10:54:21 +0800 Subject: [openstack-dev] [cyborg]Weekly Team Meeting 2018.07.11 Message-ID: Hi Team, Weekly meeting as usual starting UTC1400 at #openstack-cyborg, since holiday is over, let's focus on getting Rocky features done :) -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From lhinds at redhat.com Wed Jul 11 06:20:00 2018 From: lhinds at redhat.com (Luke Hinds) Date: Wed, 11 Jul 2018 07:20:00 +0100 Subject: [openstack-dev] [OSSN-0084] Data retained after deletion of a ScaleIO volume In-Reply-To: References: <2dc4a6ce-b5a3-8cee-724c-4295fd595f54@redhat.com> Message-ID: On Tue, Jul 10, 2018 at 9:08 PM, Jim Rollenhagen wrote: > On Tue, Jul 10, 2018 at 3:28 PM, Martin Chlumsky < > martin.chlumsky at gmail.com> wrote: > >> It is the workaround that is right and the discussion part that is wrong. >> >> I am familiar with this bug. Using thin volumes *and/or* enabling zero >> padding DOES ensure data contained >> in a volume is actually deleted. >> > > Great, that's super helpful. Thanks! > > Is there someone (Luke?) on the list that can send a correction for this > OSSN to all the lists it needs to go to? > > // jim > It can, but I would want to be sure we get an agreed consensus. 
The note has already gone through a review cycle where a cinder core approved the contents: https://review.openstack.org/#/c/579094/ If someone wants to put forward a patch with the needed amendments , I can send out a correction to the lists. > > >> >> On Tue, Jul 10, 2018 at 10:41 AM Jim Rollenhagen >> wrote: >> >>> On Tue, Jul 10, 2018 at 4:20 AM, Luke Hinds wrote: >>> >>>> Data retained after deletion of a ScaleIO volume >>>> --- >>>> >>>> ### Summary ### >>>> Certain storage volume configurations allow newly created volumes to >>>> contain previous data. This could lead to leakage of sensitive >>>> information between tenants. >>>> >>>> ### Affected Services / Software ### >>>> Cinder releases up to and including Queens with ScaleIO volumes >>>> using thin volumes and zero padding. >>>> >>> >>> According to discussion in the bug, this bug occurs with ScaleIO volumes >>> using thick volumes and with zero padding disabled. >>> >>> If the bug is with thin volumes and zero padding, then the workaround >>> seems quite wrong. :) >>> >>> I'm not super familiar with Cinder, so could some Cinder folks check >>> this out and re-issue a more accurate OSSN, please? >>> >>> // jim >>> >>> >>>> >>>> ### Discussion ### >>>> Using both thin volumes and zero padding does not ensure data contained >>>> in a volume is actually deleted. The default volume provisioning rule is >>>> set to thick so most installations are likely not affected. Operators >>>> can check their configuration in `cinder.conf` or check for zero padding >>>> with this command `scli --query_all`. >>>> >>>> #### Recommended Actions #### >>>> >>>> Operators can use the following two workarounds, until the release of >>>> Rocky (planned 30th August 2018) which resolves the issue. >>>> >>>> 1. Swap to thin volumes >>>> >>>> 2. 
Ensure ScaleIO storage pools use zero-padding with: >>>> >>>> `scli --modify_zero_padding_policy >>>> (((--protection_domain_id | >>>> --protection_domain_name ) >>>> --storage_pool_name ) | --storage_pool_id ) >>>> (--enable_zero_padding | --disable_zero_padding)` >>>> >>>> ### Contacts / References ### >>>> Author: Nick Tait >>>> This OSSN : https://wiki.openstack.org/wiki/OSSN/OSSN-0084 >>>> Original LaunchPad Bug : https://bugs.launchpad.net/ossn/+bug/1699573 >>>> Mailing List : [Security] tag on openstack-dev at lists.openstack.org >>>> OpenStack Security Project : https://launchpad.net/~openstack-ossg >>>> >>>> >>>> ____________________________________________________________ >>>> ______________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: OpenStack-dev-request at lists.op >>>> enstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>>> ____________________________________________________________ >>> ______________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.op >>> enstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Luke Hinds | NFV Partner Engineering | CTO Office | Red Hat e: lhinds at redhat.com | irc: lhinds @freenode | t: +44 12 52 36 2483 -------------- next part -------------- An HTML attachment was scrubbed... URL: From jiapei2 at lenovo.com Wed Jul 11 06:41:51 2018 From: jiapei2 at lenovo.com (Pei Pei2 Jia) Date: Wed, 11 Jul 2018 06:41:51 +0000 Subject: [openstack-dev] [zuul][openstack-infra][openstack-third-party-ci] Issues in Zuul v3 when setup 3rd party CI Message-ID: <7155A01359422A448E2E280E0E57B1429CDF8340@CNMAILEX02.lenovo.com> Hello OpenStackers, I'm here to ask for help. I've setup zuul v3, but if fails to pull Ironic or other projects from gerrit. It complains "Host key verification failed". Some people told me that it may caused by permissions, but I've checked that the permission is right. Could anyone help me and have a look of it? The log is here http://paste.openstack.org/show/725525/, you can also login the VPS to investigate. Thank you Jeremy Jia (贾培) Software Developer, Lenovo Cloud Technology Center 5F, Zhangjiang Mansion, 560 SongTao Rd. Pudong, Shanghai jiapei2 at lenovo.com Ph: 8621- Mobile: 8618116119081 www.lenovo.com / www.lenovo.com Forums | Blogs | Twitter | Facebook | Flickr Print only when necessary -------------- next part -------------- An HTML attachment was scrubbed... URL: From alee at redhat.com Wed Jul 11 09:18:17 2018 From: alee at redhat.com (Ade Lee) Date: Wed, 11 Jul 2018 05:18:17 -0400 Subject: [openstack-dev] [barbican] Can we support key wrapping mechanisms other than CKM_AES_CBC_PAD? 
In-Reply-To: References: Message-ID: <1531300697.4069.23.camel@redhat.com> Lingxian, I don't see any reason not to provide support for other wrapping mechanisms. Have you tried hacking the code to use one of the other wrapping mechanisms to see if it works? Ultimately, what is passed are parameters to CFFI. As long as you pass in the right input and your PKCS#11 library can support it, then there should be no problem. If it works, it makes sense to make the wrapping algorithm configurable for the plugin. It may or may not make sense to store the wrapping algorithm used in the secret plugin-metadata if we want to support migration to other HSMs. Ade On Sat, 2018-07-07 at 12:54 +1200, Lingxian Kong wrote: > Hi Barbican guys, > > Currently, I am testing the integration between Barbican and SoftHSM > v2 but I met with a problem that SoftHSM v2 doesn't > support CKM_AES_CBC_PAD key wrapping operation which is hardcoded in > Barbican code here https://github.com/openstack/barbican/blob/5dea5ce > c130b59ecfb8d46435cd7eb3212894b4c/barbican/plugin/crypto/pkcs11.py#L4 > 96. After discussion with SoftHSM team, I was told SoftHSM does > support other mechanisms such as CKM_AES_KEY_WRAP, > CKM_AES_KEY_WRAP_PAD, CKM_RSA_PKCS, or CKM_RSA_PKCS_OAEP. > > My question is, is it easy to support other wrapping mechanisms in > Barbican? Or if there is another workaround this problem? > > Cheers, > Lingxian Kong > _____________________________________________________________________ > _____ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubs > cribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From witold.bedyk at est.fujitsu.com Wed Jul 11 09:24:13 2018 From: witold.bedyk at est.fujitsu.com (Bedyk, Witold) Date: Wed, 11 Jul 2018 09:24:13 +0000 Subject: [openstack-dev] [monasca] Etharpad for Stein PTG in Denver Message-ID: Hello, I've just created an etherpad [1] for planning our sessions at the next PTG in Denver. Please add the topics you'd like to discuss. The main goal of the sessions is to agree on development priorities and coordinate the work for the next release cycle. Please also don't forget to add yourself to the attendance list, on-site or remote. Cheers Witek [1] https://etherpad.openstack.org/p/monasca-ptg-stein From n_jayshankar at yahoo.com Wed Jul 11 09:50:34 2018 From: n_jayshankar at yahoo.com (jayshankar nair) Date: Wed, 11 Jul 2018 09:50:34 +0000 (UTC) Subject: [openstack-dev] creating instance References: <1178050526.2530473.1531302634013.ref@mail.yahoo.com> Message-ID: <1178050526.2530473.1531302634013@mail.yahoo.com> there are lot of error lines in nova logs. But nothing related to instance creation. I am unable to launch instance. On Tuesday, July 10, 2018 8:34 PM, Chris Friesen wrote: On 07/10/2018 03:04 AM, jayshankar nair wrote: > Hi, > > I  am trying to create an instance of cirros os(Project/Compute/Instances). I am > getting the following error. > > Error: Failed to perform requested operation on instance "cirros1", the instance > has an error status: Please try again later [Error: Build of instance > 5de65e6d-fca6-4e78-a688-ead942e8ed2a aborted: The server has either erred or is > incapable of performing the requested operation. (HTTP 500) (Request-ID: > req-91535564-4caf-4975-8eff-7bca515d414e)]. > > How to debug the error. You'll want to look at the logs for the individual service.  
Since you were trying to create a server instance, you probably want to start with the logs for the "nova-api" service to see if there are any failure messages.  You can then check the logs for "nova-scheduler", "nova-conductor", and "nova-compute". There should be something useful in one of those. Chris __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From anlin.kong at gmail.com Wed Jul 11 10:48:48 2018 From: anlin.kong at gmail.com (Lingxian Kong) Date: Wed, 11 Jul 2018 22:48:48 +1200 Subject: [openstack-dev] [barbican] Can we support key wrapping mechanisms other than CKM_AES_CBC_PAD? In-Reply-To: <1531300697.4069.23.camel@redhat.com> References: <1531300697.4069.23.camel@redhat.com> Message-ID: Hi Ade, Thanks for your reply. I just replaced `CKM_AES_CBC_PAD` with `CKM_RSA_PKCS` here[1], of course I defined `CKM_RSA_PKCS = 0x00000001` in the code, but still got the following error: *Jul 11 10:42:05 barbican-devstack devstack at barbican-svc.service[19897]: 2018-07-11 10:42:05.309 19900 WARNING barbican.plugin.crypto.p11_crypto [req-f2d27105-4811-4c77-a321-2ac1399cc9d2 b268f84aef814ae* *da17ad3fa38e0049d 7abe0e02baec4df2b6046d7ef7f44998 - default default] Reinitializing PKCS#11 library: HSM returned response code: 0x7L CKR_ARGUMENTS_BAD: P11CryptoPluginException: HSM returned response code: 0x7L CKR_ARGUMENTS_BAD* ​[1]: https://github.com/openstack/barbican/blob/5dea5cec130b59ecfb8d46435cd7eb3212894b4c/barbican/plugin/crypto/pkcs11.py#L496 ​ Cheers, Lingxian Kong On Wed, Jul 11, 2018 at 9:18 PM, Ade Lee wrote: > Lingxian, > > I don't see any reason not to provide support for other wrapping > mechanisms. > > Have you tried hacking the code to use one of the other wrapping > mechanisms to see if it works? Ultimately, what is passed are > parameters to CFFI. As long as you pass in the right input and your > PKCS#11 library can support it, then there should be no problem. > > If it works, it makes sense to make the wrapping algorithm configurable > for the plugin. > > It may or may not make sense to store the wrapping algorithm used in > the secret plugin-metadata if we want to support migration to other > HSMs. > > Ade -------------- next part -------------- An HTML attachment was scrubbed... URL: From anlin.kong at gmail.com Wed Jul 11 10:59:40 2018 From: anlin.kong at gmail.com (Lingxian Kong) Date: Wed, 11 Jul 2018 22:59:40 +1200 Subject: [openstack-dev] [barbican] Can we support key wrapping mechanisms other than CKM_AES_CBC_PAD? In-Reply-To: References: <1531300697.4069.23.camel@redhat.com> Message-ID: BTW, i am using `CKM_RSA_PKCS` because it's the only one of the suggested mechanisms that SoftHSM supports according to the output of `pkcs11-tool --module libsofthsm2.so ---slot $slot --list-mechanisms`. *$ pkcs11-tool --module libsofthsm2.so ---slot $slot --list-mechanisms* *...* *RSA-PKCS, keySize={512,16384}, encrypt, decrypt, sign, verify, wrap, unwrap* *...* Cheers, Lingxian Kong On Wed, Jul 11, 2018 at 10:48 PM, Lingxian Kong wrote: > Hi Ade, > > Thanks for your reply. 
> > I just replaced `CKM_AES_CBC_PAD` with `CKM_RSA_PKCS` here[1], of course I > defined `CKM_RSA_PKCS = 0x00000001` in the code, but still got the > following error: > > *Jul 11 10:42:05 barbican-devstack devstack at barbican-svc.service[19897]: > 2018-07-11 10:42:05.309 19900 WARNING barbican.plugin.crypto.p11_crypto > [req-f2d27105-4811-4c77-a321-2ac1399cc9d2 b268f84aef814ae* > *da17ad3fa38e0049d 7abe0e02baec4df2b6046d7ef7f44998 - default default] > Reinitializing PKCS#11 library: HSM returned response code: 0x7L > CKR_ARGUMENTS_BAD: P11CryptoPluginException: HSM returned response code: > 0x7L CKR_ARGUMENTS_BAD* > > ​[1]: https://github.com/openstack/barbican/blob/ > 5dea5cec130b59ecfb8d46435cd7eb3212894b4c/barbican/plugin/ > crypto/pkcs11.py#L496​ > > > Cheers, > Lingxian Kong > > On Wed, Jul 11, 2018 at 9:18 PM, Ade Lee wrote: > >> Lingxian, >> >> I don't see any reason not to provide support for other wrapping >> mechanisms. >> >> Have you tried hacking the code to use one of the other wrapping >> mechanisms to see if it works? Ultimately, what is passed are >> parameters to CFFI. As long as you pass in the right input and your >> PKCS#11 library can support it, then there should be no problem. >> >> If it works, it makes sense to make the wrapping algorithm configurable >> for the plugin. >> >> It may or may not make sense to store the wrapping algorithm used in >> the secret plugin-metadata if we want to support migration to other >> HSMs. >> >> Ade > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mhuin at redhat.com Wed Jul 11 12:49:22 2018 From: mhuin at redhat.com (Matthieu Huin) Date: Wed, 11 Jul 2018 14:49:22 +0200 Subject: [openstack-dev] [zuul][openstack-infra][openstack-third-party-ci] Issues in Zuul v3 when setup 3rd party CI In-Reply-To: <7155A01359422A448E2E280E0E57B1429CDF8340@CNMAILEX02.lenovo.com> References: <7155A01359422A448E2E280E0E57B1429CDF8340@CNMAILEX02.lenovo.com> Message-ID: Hello, Have you tried running the failing command line manually? ie git clone ssh://lenovo_lxca_ci at review.openstack.org:29418/openstack/ironic /var/lib/zuul/executor-git/review.openstack.org/openstack/ironic (just replace the last argument by any path of your choosing) Make sure to specify the private key you set up to authenticate on review.openstack.org, for example using this: https://gist.github.com/gskielian/b3f165b9a25c79f82105 This should help you pinpoint the problem. MHU On Wed, Jul 11, 2018 at 8:41 AM, Pei Pei2 Jia wrote: > Hello OpenStackers, > > > > I'm here to ask for help. I've setup zuul v3, but if fails to pull Ironic > or other projects from gerrit. It complains "Host key verification failed". > Some people told me that it may caused by permissions, but I've checked > that the permission is right. Could anyone help me and have a look of it? > The log is here http://paste.openstack.org/show/725525/, you can also > login the VPS to investigate. > > > > > > Thank you > > > > > > > > > > *Jeremy Jia (**贾培**)* > Software Developer, Lenovo Cloud Technology Center > 5F, Zhangjiang Mansion, 560 SongTao Rd. 
Pudong, Shanghai > > > jiapei2 at lenovo.com > Ph: 8621- > Mobile: 8618116119081 > > > www.lenovo.com / www.lenovo.com > Forums | Blogs | > Twitter | Facebook > | Flickr > > Print only when necessary > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hrybacki at redhat.com Wed Jul 11 13:24:06 2018 From: hrybacki at redhat.com (Harry Rybacki) Date: Wed, 11 Jul 2018 09:24:06 -0400 Subject: [openstack-dev] [keystone] Adding Wangxiyuan to keystone core In-Reply-To: References: Message-ID: On Tue, Jul 10, 2018 at 6:06 PM Lance Bragstad wrote: > > Hi all, > > Today we added Wangxiyuan to the keystone core team [0]. He's been doing > a bunch of great work over the last couple releases and has become a > valuable reviewer [1][2]. He's also been instrumental in pushing forward > the unified limits work not only in keystone, but across projects. > > Thanks Wangxiyuan for all your help and welcome to the team! > +1 well deserved! > Lance > > [0] > http://eavesdrop.openstack.org/meetings/keystone/2018/keystone.2018-07-10-16.00.log.html#l-100 > [1] http://stackalytics.com/?module=keystone-group > [2] http://stackalytics.com/?module=keystone-group&release=queens > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev Harry From juliaashleykreger at gmail.com Wed Jul 11 13:52:51 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Wed, 11 Jul 2018 09:52:51 -0400 Subject: [openstack-dev] [ironic] Correction: "mid-cycle" call Tuesday, July 17th - 12:00 PM UTC Message-ID: Greetings everyone! In my rush to get the email sent, I somehow put the wrong time on the email. The correct date and time is Tuesday, July 17th, at 12:00 UTC. -Julia On Tue, Jul 10, 2018 at 12:28 PM, Julia Kreger wrote: > Fellow ironicans! > > Lend me your ears! With the cycle quickly coming to a close, we > wanted to take a couple hours for high bandwidth discussions covering > the end of cycle for Ironic, as well as any items that need to be > established in advance of the PTG. > > We're going to use bluejeans[1] since it seems to work well for > everyone, and I've posted a rough agenda[2] to an etherpad. If there > are additional items, please feel free to add them to the etherpad. > > -Julia > > [1]: https://bluejeans.com/437242882/ > [2]: https://etherpad.openstack.org/p/ironic-rocky-midcycle From bodenvmw at gmail.com Wed Jul 11 14:39:29 2018 From: bodenvmw at gmail.com (Boden Russell) Date: Wed, 11 Jul 2018 08:39:29 -0600 Subject: [openstack-dev] [neutron] Finalizing neutron-lib release for Rocky Message-ID: Howdy, We need to have a final release of neutron-lib for Rocky by July 19th, so we should probably propose a neutron-lib 1.18.0 release early next week. To help focus our review efforts between now and then I'd like to ask folks to tag any neutron-lib patches they deem necessary for Rocky with the string "rocky-candidate" (just add a comment with "rocky-candidate" to the patch). This will allow us to use a query [1] to focus our efforts in this regard. 
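If you'd rather pull that list from a script than from the web UI, something along
these lines against Gerrit's REST API does the same job (just an illustrative
helper, nothing in-tree):

import json
import requests

# Gerrit's change query endpoint; the 'comment:' operator matches text left in
# review comments, which is how the "rocky-candidate" tag gets applied.
resp = requests.get(
    'https://review.openstack.org/changes/',
    params={'q': 'project:openstack/neutron-lib status:open comment:rocky-candidate'})
# Gerrit prefixes JSON responses with ")]}'" to guard against XSSI; drop that line.
changes = json.loads(resp.text.split('\n', 1)[1])
for change in changes:
    print(change['_number'], change['subject'])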
Please keep in mind that API reference patches don't fall into this category as they are not based on pypi releases (IIUC). If you have any questions please feel free to reach out on the #openstack-neutron channel. Cheers [1] https://review.openstack.org/#/q/project:openstack/neutron-lib+comment:rocky-candidate From Louie.Kwan at windriver.com Wed Jul 11 16:22:14 2018 From: Louie.Kwan at windriver.com (Kwan, Louie) Date: Wed, 11 Jul 2018 16:22:14 +0000 Subject: [openstack-dev] FW: Change in openstack/masakari-monitors[master]: Introspective Instance Monitoring through QEMU Guest Agent In-Reply-To: References: Message-ID: <47EFB32CD8770A4D9590812EE28C977E9637DD0E@ALA-MBD.corp.ad.wrs.com> Thanks again Tushar and Adam for reviewing 534958. Anything else to get Workflow to +1? Thanks. Louie -----Original Message----- From: Tushar Patil (Code Review) [mailto:review at openstack.org] Sent: Tuesday, July 10, 2018 10:21 PM To: Kwan, Louie Cc: Friesen, Chris; Tim Bell; Waines, Greg; Li Yingjun; Sampath Priyankara (samP); wangqiang-bj; Jean-Philippe Evrard; Young, Ken; Tushar Patil; Andrew Beekhof; Abhishek Kekane; zhangyangyang; takahara.kengo; Rikimaru Honjo; Dinesh Bhor; Michele Baldessari; Adam Spiers Subject: Change in openstack/masakari-monitors[master]: Introspective Instance Monitoring through QEMU Guest Agent Tushar Patil has posted comments on this change. ( https://review.openstack.org/534958 ) Change subject: Introspective Instance Monitoring through QEMU Guest Agent ...................................................................... Patch Set 9: Code-Review+2 LGTM. Thank you. -- To view, visit https://review.openstack.org/534958 To unsubscribe, visit https://review.openstack.org/settings Gerrit-MessageType: comment Gerrit-Change-Id: I9efc6afc8d476003d3aa7fee8c31bcaa65438674 Gerrit-PatchSet: 9 Gerrit-Project: openstack/masakari-monitors Gerrit-Branch: master Gerrit-Owner: Louie Kwan Gerrit-Reviewer: Abhishek Kekane Gerrit-Reviewer: Adam Spiers Gerrit-Reviewer: Andrew Beekhof Gerrit-Reviewer: Chris Friesen Gerrit-Reviewer: Dinesh Bhor Gerrit-Reviewer: Greg Waines Gerrit-Reviewer: Hieu LE Gerrit-Reviewer: Jean-Philippe Evrard Gerrit-Reviewer: Ken Young Gerrit-Reviewer: Li Yingjun Gerrit-Reviewer: Louie Kwan Gerrit-Reviewer: Michele Baldessari Gerrit-Reviewer: Rikimaru Honjo Gerrit-Reviewer: Sampath Priyankara (samP) Gerrit-Reviewer: Tim Bell Gerrit-Reviewer: Tushar Patil Gerrit-Reviewer: Tushar Patil Gerrit-Reviewer: Zuul Gerrit-Reviewer: takahara.kengo Gerrit-Reviewer: wangqiang-bj Gerrit-Reviewer: zhangyangyang Gerrit-HasComments: No -------------- next part -------------- An HTML attachment was scrubbed... URL: From aschultz at redhat.com Wed Jul 11 16:39:30 2018 From: aschultz at redhat.com (Alex Schultz) Date: Wed, 11 Jul 2018 10:39:30 -0600 Subject: [openstack-dev] [tripleo] Rocky blueprints Message-ID: Hello everyone, As milestone 3 is quickly approaching, it's time to review the open blueprints[0] and their status. It appears that we have made good progress on implementing significant functionality this cycle but we still have some open items. Below is the list of blueprints that are still open so we'll want to see if they will make M3 and if not, we'd like to move them out to Stein and they won't make Rocky without an FFE. 
Currently not marked implemented but without any open patches (likely implemented): - https://blueprints.launchpad.net/tripleo/+spec/major-upgrade-workflow - https://blueprints.launchpad.net/tripleo/+spec/tripleo-predictable-ctlplane-ips Currently open with pending patches (may need FFE): - https://blueprints.launchpad.net/tripleo/+spec/config-download-ui - https://blueprints.launchpad.net/tripleo/+spec/container-prepare-workflow - https://blueprints.launchpad.net/tripleo/+spec/containerized-undercloud - https://blueprints.launchpad.net/tripleo/+spec/bluestore - https://blueprints.launchpad.net/tripleo/+spec/gui-node-discovery-by-range - https://blueprints.launchpad.net/tripleo/+spec/multiarch-support - https://blueprints.launchpad.net/tripleo/+spec/tripleo-routed-networks-templates - https://blueprints.launchpad.net/tripleo/+spec/sriov-vfs-as-network-interface - https://blueprints.launchpad.net/tripleo/+spec/custom-validations Currently open without work (should be moved to Stein): - https://blueprints.launchpad.net/tripleo/+spec/automated-ui-testing - https://blueprints.launchpad.net/tripleo/+spec/plan-from-git-in-gui - https://blueprints.launchpad.net/tripleo/+spec/tripleo-ui-react-walkthrough - https://blueprints.launchpad.net/tripleo/+spec/wrapping-workflow-for-node-operations - https://blueprints.launchpad.net/tripleo/+spec/ironic-overcloud-ci Please take some time to review this list and update it. If you think you are close to finishing out the feature and would like to request an FFE please start getting that together with appropriate details and justifications for the FFE. Thanks, -Alex [0] https://blueprints.launchpad.net/tripleo/rocky From melwittt at gmail.com Wed Jul 11 17:40:19 2018 From: melwittt at gmail.com (melanie witt) Date: Wed, 11 Jul 2018 10:40:19 -0700 Subject: [openstack-dev] [nova] Rocky blueprint status tracking In-Reply-To: References: Message-ID: On Fri, 15 Jun 2018 16:12:21 -0500, Matt Riedemann wrote: > On 6/15/2018 11:23 AM, melanie witt wrote: >> Similar to last cycle, we have an etherpad for tracking the status of >> approved nova blueprints for Rocky here: >> >> https://etherpad.openstack.org/p/nova-rocky-blueprint-status >> >> that we can use to help us review patches. If I've missed any blueprints >> or if anything needs an update, please add a note on the etherpad and >> we'll get it sorted. > > Thanks for doing this, I find it very useful to get an overall picture > of where we're sitting in the final milestone. Milestone r-3 (feature freeze) is just around the corner July 26 and I've refreshed the status tracking etherpad, mostly because some of the wayward blueprints are now ready for review. There are 3 blueprints which have only one patch left to merge before they're complete. Please check out the etherpad and use it as a guide for your reviews, so we can complete as many blueprints as we can before FF. And please add notes or move/add blueprints that I might have missed. Thanks all, -melanie From melwittt at gmail.com Wed Jul 11 17:43:53 2018 From: melwittt at gmail.com (melanie witt) Date: Wed, 11 Jul 2018 10:43:53 -0700 Subject: [openstack-dev] [nova] review runway status Message-ID: <8f43b948-921b-d608-c23e-0e8d91ca2540@gmail.com> Howdy everyone, Here is the current review runway [1] status for blueprints in runways. Milestone r-3 (feature freeze) is coming up soon July 26, so this will be the last runway before FF unless we can complete some earlier than their end dates. 
* Allow abort live migrations in queued status https://blueprints.launchpad.net/nova/+spec/abort-live-migration-in-queued-status (Kevin Zheng) [END DATE: 2018-07-25] starts here https://review.openstack.org/563505 * Add z/VM driver https://blueprints.launchpad.net/nova/+spec/add-zvm-driver-rocky (jichen) [END DATE: 2018-07-25] starts here https://review.openstack.org/523387 * Support traits in Glance https://blueprints.launchpad.net/nova/+spec/glance-image-traits (arvindn05) [END DATE: 2018-07-25] last patch https://review.openstack.org/569498 Best, -melanie [1] https://etherpad.openstack.org/p/nova-runways-rocky From melwittt at gmail.com Wed Jul 11 18:49:23 2018 From: melwittt at gmail.com (melanie witt) Date: Wed, 11 Jul 2018 11:49:23 -0700 Subject: [openstack-dev] [nova] Denver Stein ptg planning Message-ID: <60144508-c601-95f8-1b39-3b5287b2ff76@gmail.com> Hello Devs and Ops, I've created an etherpad where we can start collecting ideas for topics to cover at the Stein PTG. Please feel free to add your comments and topics with your IRC nick next to it to make it easier to discuss with you. https://etherpad.openstack.org/p/nova-ptg-stein Cheers, -melanie From lbragstad at gmail.com Wed Jul 11 19:01:43 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Wed, 11 Jul 2018 14:01:43 -0500 Subject: [openstack-dev] [keystone] Stein PTG Planning Etherpad Message-ID: It's getting to be that time of the release (and I'm seeing other etherpads popping up on the mailing list). I've created one specifically for keystone [0]. Same drill as the last two PTGs. We'll start by just getting topics written down and I'll group similar topics into buckets prior to building a somewhat official schedule. Please feel free to add topics you'd like to discuss at the PTG. [0] https://etherpad.openstack.org/p/keystone-stein-ptg -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From ashlee at openstack.org Wed Jul 11 19:34:33 2018 From: ashlee at openstack.org (Ashlee Ferguson) Date: Wed, 11 Jul 2018 14:34:33 -0500 Subject: [openstack-dev] OpenStack Summit Berlin CFP Closes July 17 Message-ID: Hi everyone, The CFP deadline for the OpenStack Summit Berlin is less than one week away, so make sure to submit your talks before July 18 at 6:59am UTC (July 17 at 11:59pm PST). Tracks: • CI/CD • Container Infrastructure • Edge Computing • Hands on Workshops • HPC / GPU / AI • Open Source Community • Private & Hybrid Cloud • Public Cloud • Telecom & NFV SUBMIT HERE Community voting, the first step in building the Summit schedule, will open in mid July. Once community voting concludes, a Programming Committee for each Track will build the schedule. Programming Committees are made up of individuals from many different open source communities working in open infrastructure, in addition to people who have participated in the past. Read the full selection process here . Register for the Summit - Early Bird pricing ends August 21 Become a Sponsor Cheers, Ashlee Ashlee Ferguson OpenStack Foundation ashlee at openstack.org -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ildiko.vancsa at gmail.com Wed Jul 11 20:39:07 2018 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Wed, 11 Jul 2018 15:39:07 -0500 Subject: [openstack-dev] [keystone] Federation testing Message-ID: Hi, Within the Edge Computing Group we have a few people interested in Keystone federation testing starting with general federation and moving to edge specific test cases onwards. In case you are interested in this activity, we are organizing a call for next Thursday to talk about basic testing in OpenStack including identifying tasks and volunteers to complete them. We would like to use the time to clarify questions about Keystone federation capabilities if there’s any. We are also collaborating with the OPNFV Edge Cloud project for advanced test scenarios which we will also discuss on the call. The call details are here: https://wiki.openstack.org/wiki/Edge_Computing_Group#Federation_Testing_Call Please check out the materials on this etherpad prior to the call if you plan to join: https://etherpad.openstack.org/p/ECG_Keystone_Testing Please let me know if you have any questions. Thanks and Best Regards, Ildikó From whayutin at redhat.com Wed Jul 11 22:40:07 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Wed, 11 Jul 2018 16:40:07 -0600 Subject: [openstack-dev] [tripleo][ci] PTG Stein topics Message-ID: Greetings, Starting to collect thoughts and comments here, https://etherpad.openstack.org/p/tripleoci-ptg-stein Thanks -- Wes Hayutin Associate MANAGER Red Hat w hayutin at redhat.com T: +1919 <+19197544114>4232509 IRC: weshay View my calendar and check my availability for meetings HERE -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Thu Jul 12 01:14:37 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 12 Jul 2018 10:14:37 +0900 Subject: [openstack-dev] [nova]API update week 5-11 Message-ID: <1648c0de17c.116c7e05210664.8822607615684205579@ghanshyammann.com> Hi All, Please find the Nova API highlights of this week. Weekly Office Hour: =============== We had more attendees in this week office hours. What we discussed this week: - Discussion on API related BP. Discussion points are embedded inline with BP weekly progress in next section. - Triage 1 new bug and Alex reviewed one in-progress Planned Features : ============== Below are the API related features for Rocky cycle. Nova API Sub team will start reviewing those to give their regular feedback. If anythings missing there feel free to add those in etherpad- https://etherpad.openstack.org/p/rocky-nova-priorities-tracking 1. Servers Ips non-unique network names : - https://blueprints.launchpad.net/nova/+spec/servers-ips-non-unique-network-names - Spec Merged - https://review.openstack.org/#/q/topic:bp/servers-ips-non-unique-network-names+(status:open+OR+status:merged) - Weekly Progress: Spec is merged. I am in contact with author about code update (sent email last night). If no response till this week, i will push the code update for this BP. 2. Abort live migration in queued state: - https://blueprints.launchpad.net/nova/+spec/abort-live-migration-in-queued-status - https://review.openstack.org/#/q/topic:bp/abort-live-migration-in-queued-status+(status:open+OR+status:merged) - Weekly Progress: Review is going and it is in nova runway this week. In API office hour, we discussed about doing the compute service version checks on compute.api.py side than on rpc side. 
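For anyone not following the review, the two places such a guard could live look
roughly like this (the constant, exception and version number below are
illustrative only, not what the actual patch does):

# Option A, in compute/api.py: refuse the request up front based on the
# minimum nova-compute service version recorded in the database.
minver = objects.Service.get_minimum_version(context, 'nova-compute')
if minver < MIN_COMPUTE_ABORT_QUEUED_LIVE_MIGRATION:
    raise exception.AbortQueuedLiveMigrationNotSupported()

# Option B, in compute/rpcapi.py: decide at cast time from the RPC version
# actually negotiated/pinned for the target compute.
if not client.can_send_version('5.1'):
    raise exception.AbortQueuedLiveMigrationNotSupported()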
Dan has point of doing it on rpc side where migration status can changed to running. We decided to further discussed it on patch. 3. Complex anti-affinity policies: - https://blueprints.launchpad.net/nova/+spec/complex-anti-affinity-policies - https://review.openstack.org/#/q/topic:bp/complex-anti-affinity-policies+(status:open+OR+status:merged) - Weekly Progress: Good review progress. In API office hour, we discussed on 2 points- 1. whether request also need to have flat format like response. IMO we need to have flat in both request and response. Yikun need more opinion on that. 2. naming fields to policy_* as we are moving these new fields in flat format. I like to have policy_* for clear understanding of attributes by their name. This is not concluded and alex will give feedback on patch. Discussion is on patch for consensus on naming things. 4. Volume multiattach enhancements: - https://blueprints.launchpad.net/nova/+spec/volume-multiattach-enhancements - https://review.openstack.org/#/q/topic:bp/volume-multiattach-enhancements+(status:open+OR+status:merged) - Weekly Progress: mriedem mentioned in last week status mail that he will continue work on this. 5. API Extensions merge work - https://blueprints.launchpad.net/nova/+spec/api-extensions-merge-rocky - https://review.openstack.org/#/q/project:openstack/nova+branch:master+topic:bp/api-extensions-merge-rocky - Weekly Progress: I did not get chance to push more patches on this. I will target this one before next office hour. 6. Handling a down cell - https://blueprints.launchpad.net/nova/+spec/handling-down-cell - Spec mriedem mentioned in previous week ML is merged - https://review.openstack.org/#/c/557369/ Bugs: ==== Triage 1 new bug and Alex reviewed one in-progress. I did not do my home work of doing review on in-progress patches (i will accommodate that in next week) This week Bug Progress: Critical: 0->0 High importance: 2->3 By Status: New: 1->0 Confirmed/Triage: 30-> 31 In-progress: 36->36 Incomplete: 4->4 ===== Total: 70->71 NOTE- There might be some bug which are not tagged as 'api' or 'api-ref', those are not in above list. Tag such bugs so that we can keep our eyes. Ref: https://etherpad.openstack.org/p/nova-api-weekly-bug-report -gmann From gmann at ghanshyammann.com Thu Jul 12 01:43:28 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 12 Jul 2018 10:43:28 +0900 Subject: [openstack-dev] [qa][ptg] Stein PTG Planning for QA Message-ID: <1648c284c05.d910ee2310724.4551801102098612405@ghanshyammann.com> Hi All, As we are close to Stein PTG Denver, I have prepared the etherpad[1] to collect the PTG topic ideas for QA. Please start adding your item/topic you want to discuss in PTG or comment on proposed topics. Even you are not making to PTG physically, still add your topic which you want us to discuss or progress on. [1] https://etherpad.openstack.org/p/qa-stein-ptg -gmann From zhengzhenyulixi at gmail.com Thu Jul 12 02:03:54 2018 From: zhengzhenyulixi at gmail.com (Zhenyu Zheng) Date: Thu, 12 Jul 2018 10:03:54 +0800 Subject: [openstack-dev] [nova]API update week 5-11 In-Reply-To: <1648c0de17c.116c7e05210664.8822607615684205579@ghanshyammann.com> References: <1648c0de17c.116c7e05210664.8822607615684205579@ghanshyammann.com> Message-ID: > > 2. 
Abort live migration in queued state: > - > https://blueprints.launchpad.net/nova/+spec/abort-live-migration-in-queued-status > > - > https://review.openstack.org/#/q/topic:bp/abort-live-migration-in-queued-status+(status:open+OR+status:merged) > > - Weekly Progress: Review is going and it is in nova runway this week. In > API office hour, we discussed about doing the compute service version > checks on compute.api.py side than on rpc side. Dan has point of doing it > on rpc side where migration status can changed to running. We decided to > further discussed it on patch. This is my own defence, Dan's point seems to be that the actual rpc version pin could be set to be lower than the can_send_version even when the service version is new enough, so he thinks doing it in rpc is better. On Thu, Jul 12, 2018 at 9:15 AM Ghanshyam Mann wrote: > Hi All, > > Please find the Nova API highlights of this week. > > Weekly Office Hour: > =============== > We had more attendees in this week office hours. > > What we discussed this week: > - Discussion on API related BP. Discussion points are embedded inline with > BP weekly progress in next section. > - Triage 1 new bug and Alex reviewed one in-progress > > Planned Features : > ============== > Below are the API related features for Rocky cycle. Nova API Sub team will > start reviewing those to give their regular feedback. If anythings missing > there feel free to add those in etherpad- > https://etherpad.openstack.org/p/rocky-nova-priorities-tracking > > 1. Servers Ips non-unique network names : > - > https://blueprints.launchpad.net/nova/+spec/servers-ips-non-unique-network-names > - Spec Merged > - > https://review.openstack.org/#/q/topic:bp/servers-ips-non-unique-network-names+(status:open+OR+status:merged) > - Weekly Progress: Spec is merged. I am in contact with author about code > update (sent email last night). If no response till this week, i will push > the code update for this BP. > > 2. Abort live migration in queued state: > - > https://blueprints.launchpad.net/nova/+spec/abort-live-migration-in-queued-status > - > https://review.openstack.org/#/q/topic:bp/abort-live-migration-in-queued-status+(status:open+OR+status:merged) > - Weekly Progress: Review is going and it is in nova runway this week. In > API office hour, we discussed about doing the compute service version > checks on compute.api.py side than on rpc side. Dan has point of doing it > on rpc side where migration status can changed to running. We decided to > further discussed it on patch. > > 3. Complex anti-affinity policies: > - > https://blueprints.launchpad.net/nova/+spec/complex-anti-affinity-policies > - > https://review.openstack.org/#/q/topic:bp/complex-anti-affinity-policies+(status:open+OR+status:merged) > - Weekly Progress: Good review progress. In API office hour, we discussed > on 2 points- > 1. whether request also need to have flat format like response. IMO > we need to have flat in both request and response. Yikun need more opinion > on that. > > 2. naming fields to policy_* as we are moving these new fields in > flat format. I like to have policy_* for clear understanding of attributes > by their name. This is not concluded > and alex will give feedback on patch. > Discussion is on patch for consensus on naming things. > > 4. 
Volume multiattach enhancements: > - > https://blueprints.launchpad.net/nova/+spec/volume-multiattach-enhancements > - > https://review.openstack.org/#/q/topic:bp/volume-multiattach-enhancements+(status:open+OR+status:merged) > - Weekly Progress: mriedem mentioned in last week status mail that he will > continue work on this. > > 5. API Extensions merge work > - https://blueprints.launchpad.net/nova/+spec/api-extensions-merge-rocky > - > https://review.openstack.org/#/q/project:openstack/nova+branch:master+topic:bp/api-extensions-merge-rocky > - Weekly Progress: I did not get chance to push more patches on this. I > will target this one before next office hour. > > 6. Handling a down cell > - https://blueprints.launchpad.net/nova/+spec/handling-down-cell > - Spec mriedem mentioned in previous week ML is merged - > https://review.openstack.org/#/c/557369/ > > Bugs: > ==== > Triage 1 new bug and Alex reviewed one in-progress. I did not do my home > work of doing review on in-progress patches (i will accommodate that in > next week) > > This week Bug Progress: > Critical: 0->0 > High importance: 2->3 > By Status: > New: 1->0 > Confirmed/Triage: 30-> 31 > In-progress: 36->36 > Incomplete: 4->4 > ===== > Total: 70->71 > > NOTE- There might be some bug which are not tagged as 'api' or 'api-ref', > those are not in above list. Tag such bugs so that we can keep our eyes. > > Ref: https://etherpad.openstack.org/p/nova-api-weekly-bug-report > > -gmann > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From adriant at catalyst.net.nz Thu Jul 12 04:01:21 2018 From: adriant at catalyst.net.nz (Adrian Turjak) Date: Thu, 12 Jul 2018 16:01:21 +1200 Subject: [openstack-dev] [publiccloud-wg] [adjutant] Input on Adjutant's official project status Message-ID: Hello fellow public cloud providers (and others)! Adjutant is in the process of being voted in (or not) as an official project as part of OpenStack, but to help over the last few hurdles, some input from the people who would likely benefit the most directly from such a service existing would really be useful. In the past you've probably talked to me about the need for some form of business logic related APIs and services in OpenStack (signup, account termination, project/user management, billing details management, etc). In that space I've been trying to push Adjutant as a solution, not because it's the perfect solution, but because we are trying to keep the service as a cloud agnostic solution that could be tweaked for the unique requirements of various clouds. It's also a place were we can collaborate on these often rather miscellaneous business logic requirements rather than us each writing our own entirely distinct thing and wasting time and effort reinventing the wheel again and again. The review in question where this discussion has been happening for a while: https://review.openstack.org/#/c/553643/ And if you don't know much about Adjutant, here is a little background. 
The current mission statement is: "To provide an extensible API framework for exposing to users an organization's automated business processes relating to account management across OpenStack and external systems, that can be adapted to the unique requirements of an organization's processes." The docs: https://adjutant.readthedocs.io/en/latest/ The code: https://github.com/openstack/adjutant And here is a rough feature list that was put together as part of the review process for official project status: https://etherpad.openstack.org/p/Adjutant_Features If you have any questions about the service, don't hesitate to get in touch, but some input on the current discussion would be very welcome! Cheers, Adrian Turjak -------------- next part -------------- A non-text attachment was scrubbed... Name: pEpkey.asc Type: application/pgp-keys Size: 1769 bytes Desc: not available URL: From tony at bakeyournoodle.com Thu Jul 12 04:34:57 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Thu, 12 Jul 2018 14:34:57 +1000 Subject: [openstack-dev] [requirements][infra] Maintaining constraints for several python versions Message-ID: <20180712043457.GA22285@thor.bakeyournoodle.com> Hi Folks, We have a pit of a problem in openstack/requirements and I'd liek to chat about it. Currently when we generate constraints we create a venv for each (system) python supplied on the command line, install all of global-requirements into that venv and capture the pip freeze. Where this falls down is if we want to generate a freeze for python 3.4 and 3.5 we need an image that has both of those. We cheated and just 'clone' them so if python3 is 3.4 we copy the results to 3.5 and vice versa. This kinda worked for a while but it has drawbacks. I can see a few of options: 1. Build pythons from source and use that to construct the venv [please no] 2. Generate the constraints in an F28 image. My F28 has ample python versions: - /usr/bin/python2.6 - /usr/bin/python2.7 - /usr/bin/python3.3 - /usr/bin/python3.4 - /usr/bin/python3.5 - /usr/bin/python3.6 - /usr/bin/python3.7 I don't know how valid this still is but in the past fedora images have been seen as unstable and hard to keep current. If that isn't still the feeling then we could go down this path. Currently there a few minor problems with bindep.txt on fedora and generate-constraints doesn't work with py3 but these are pretty minor really. 3. Use docker images for python and generate the constraints with them. I've hacked up something we could use as a base for that in: https://review.openstack.org/581948 There are lots of open questions: - How do we make this nodepool/cloud provider friendly ? * Currently the containers just talk to the main debian mirrors. Do we have debian packages? If so we could just do sed magic. - Do/Can we run a registry per provider? - Can we generate and caches these images and only run pip install -U g-r to speed up the build - Are we okay with using docker this way? I like #2 the most but I wanted to seek wider feedback. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From tdecacqu at redhat.com Thu Jul 12 04:54:38 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Thu, 12 Jul 2018 04:54:38 +0000 Subject: [openstack-dev] [all] log-classify project update (anomaly detection in CI/CD logs) In-Reply-To: <1530780669.k1udih7bo7.tristanC@fedora> References: <1530601298.luby16yqut.tristanC@fedora> <1530780669.k1udih7bo7.tristanC@fedora> Message-ID: <1531370791.r4nn3973qm.tristanC@fedora> On July 5, 2018 9:17 am, Tristan Cacqueray wrote: > On July 3, 2018 7:39 am, Tristan Cacqueray wrote: > [...] >> There is a lot to do and it will be challening. To that effect, I would >> like to propose an initial meeting with all interested parties. >> Please register your irc name and timezone in this etherpad: >> >> https://etherpad.openstack.org/p/log-classify >> > So far, the mean timezone is UTC+1.75, I've added date proposal from the > 16th to the 20th of July. Please adds a '+' to the one you can attend. > I'll follow-up next week with an ical file for the most popular. > Wednesday 18 July at 12:00 UTC has the most votes. There is now a #log-classify channel on Freenode. And I also started an infra-spec draft here: https://review.openstack.org/#/c/581214/1/specs/log_classify.rst See you then. -Tristan -------------- next part -------------- A non-text attachment was scrubbed... Name: log-classify.ics Type: text/calendar Size: 302 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From yamamoto at midokura.com Thu Jul 12 06:53:22 2018 From: yamamoto at midokura.com (Takashi Yamamoto) Date: Thu, 12 Jul 2018 15:53:22 +0900 Subject: [openstack-dev] [neutron] Stable review Message-ID: hi, queens branch of networking-midonet has had no changes merged since its creation. the following commit would tell you how many gate blockers have been accumulated. https://review.openstack.org/#/c/572242/ it seems the stable team doesn't have a bandwidth to review subprojects in a timely manner. i'm afraid that we need some policy changes. From tony at bakeyournoodle.com Thu Jul 12 06:53:51 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Thu, 12 Jul 2018 16:53:51 +1000 Subject: [openstack-dev] [requirements][storyboard] Updates between SB and LP Message-ID: <20180712065350.GB22285@thor.bakeyournoodle.com> Hi all, The requirements team is only a light user of Launchpad and we're looking at moving to StoryBoard as it looks like for the most part it'll be a better fit. To date the thing that has stopped us doing this is the handling of bugs/stories that are shared between LP and SB. Assume that requirements had migrated to SB, how would be deal with bugs like: https://bugs.launchpad.net/openstack-requirements/+bug/1753969 Is there a, supportable, bi-directional path between SB and LP? I suspect the answer is No. I imagine if we only wanted to get updates from LP reflected in our SB story we could just leave the bug tracker open on LP and run the migration tool "often". Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From tony at bakeyournoodle.com Thu Jul 12 09:13:09 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Thu, 12 Jul 2018 19:13:09 +1000 Subject: [openstack-dev] [neutron] Stable review In-Reply-To: References: Message-ID: <20180712091309.GC22285@thor.bakeyournoodle.com> On Thu, Jul 12, 2018 at 03:53:22PM +0900, Takashi Yamamoto wrote: > hi, > > queens branch of networking-midonet has had no changes merged since > its creation. > the following commit would tell you how many gate blockers have been > accumulated. > https://review.openstack.org/#/c/572242/ > > it seems the stable team doesn't have a bandwidth to review subprojects > in a timely manner. The project specific stable team is responsible for reviewing those changes. The global stable team will review project specific changes if they're requested to. I'll treat this email as such a request. Please ask a member of neutron-stable-maint[1] to take a look at your review. > i'm afraid that we need some policy changes. No we need more contributors to stable and extended maintenance periods. This is not a new problem, and one we're trying to correct. Yours Tony. [1] https://review.openstack.org/#/admin/groups/539,members -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From ltomasbo at redhat.com Thu Jul 12 09:31:45 2018 From: ltomasbo at redhat.com (Luis Tomas Bolivar) Date: Thu, 12 Jul 2018 11:31:45 +0200 Subject: [openstack-dev] [kuryr] Namespace isolation options Message-ID: <01ce0687-ce2f-558b-141e-e4485574e8a1@redhat.com> Hi folks, I'm working on the kuryr-kubernetes namespace feature to enable isolation between the different namespaces, i.e., pods on namespace A cannot 'talk' to pods or services on namespace B. For the pods isolation, there is already a patch working: https://review.openstack.org/#/c/579181 However, for the services is a bit more complex. There is some initial work on: https://review.openstack.org/#/c/581421 The above patch ensures isolation between services by modifying the security group associated to the loadbalancer VM to only allow traffic from ports with a given security group, in our case the one associated to the namespace. However, it is missing how to handle special cases, such as route and services of LoadBalancer type. For the LoadBalancer type we have two option: 1) When the service is of LoadBalancer type not modify the security group associated to it as it is meant to be accessible from outsite. This basically is the out of the box behaviour of octavia. Pros: it is simple to implement and does not require any extra information. Cons: the svc can be accessed not only on the FIP, but also on the VIP. 2) Add a new security group rule also enabling the traffic from the public-subnet CIDR. Pros: It will not enable access from the VIP, only from the FIP. Cons: it either needs admin rights to get the public-subnet CIDR or a new config option where we specify it. Any preferences? I already tested option 1) and will update the patch set with it shortly, but if option 2) is preferred, I will of course update the PS accordingly. Thanks! 
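For concreteness, option 2) would essentially come down to one extra ingress rule
on the load balancer security group, something like the sketch below (illustrative
only, not the actual patch; the config option name is made up and corresponds to
the "new config option" variant):

# In addition to the rule allowing the namespace's own security group,
# also allow traffic coming from the public-subnet CIDR.
public_cidr = CONF.namespace_sg.external_svc_subnet_cidr  # hypothetical option,
# or looked up with admin credentials via neutron.show_subnet()
neutron.create_security_group_rule({
    'security_group_rule': {
        'security_group_id': lb_sg_id,
        'direction': 'ingress',
        'protocol': 'tcp',
        'port_range_min': svc_port,
        'port_range_max': svc_port,
        'remote_ip_prefix': public_cidr,
    }})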
Best regards, Luis -- LUIS TOMÁS BOLÍVAR SENIOR SOFTWARE ENGINEER Red Hat Madrid, Spain ltomasbo at redhat.com From yamamoto at midokura.com Thu Jul 12 10:10:09 2018 From: yamamoto at midokura.com (Takashi Yamamoto) Date: Thu, 12 Jul 2018 19:10:09 +0900 Subject: [openstack-dev] [neutron] Stable review In-Reply-To: <20180712091309.GC22285@thor.bakeyournoodle.com> References: <20180712091309.GC22285@thor.bakeyournoodle.com> Message-ID: hi, On Thu, Jul 12, 2018 at 6:13 PM, Tony Breeds wrote: > On Thu, Jul 12, 2018 at 03:53:22PM +0900, Takashi Yamamoto wrote: >> hi, >> >> queens branch of networking-midonet has had no changes merged since >> its creation. >> the following commit would tell you how many gate blockers have been >> accumulated. >> https://review.openstack.org/#/c/572242/ >> >> it seems the stable team doesn't have a bandwidth to review subprojects >> in a timely manner. > > The project specific stable team is responsible for reviewing those > changes. The global stable team will review project specific changes > if they're requested to. I'll treat this email as such a request. > > Please ask a member of neutron-stable-maint[1] to take a look at your > review. i was talking about neutron stable team. nothing about the global stable team. sorry if it was confusing. > >> i'm afraid that we need some policy changes. > > No we need more contributors to stable and extended maintenance periods. > This is not a new problem, and one we're trying to correct. actually it is a new problem. at least worse than before. > > Yours Tony. > > [1] https://review.openstack.org/#/admin/groups/539,members > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From alee at redhat.com Thu Jul 12 11:18:14 2018 From: alee at redhat.com (Ade Lee) Date: Thu, 12 Jul 2018 07:18:14 -0400 Subject: [openstack-dev] [barbican] Can we support key wrapping mechanisms other than CKM_AES_CBC_PAD? In-Reply-To: References: <1531300697.4069.23.camel@redhat.com> Message-ID: <1531394294.4069.65.camel@redhat.com> You probably also need to change the parameters being added to the structure to match the chosen padding mechanism. mech = self.ffi.new("CK_MECHANISM *") mech.mechanism = CKM_AES_CBC_PAD iv = self._generate_random(16, session) mech.parameter = iv mech.parameter_len = 16 > > CKR_ARGUMENTS_BAD probably indicates that whats in mech.parameter > > is bad. On Wed, 2018-07-11 at 22:59 +1200, Lingxian Kong wrote: > BTW, i am using `CKM_RSA_PKCS` because it's the only one of the > suggested mechanisms that SoftHSM supports according to the output of > `pkcs11-tool --module libsofthsm2.so ---slot $slot --list- > mechanisms`. > > $ pkcs11-tool --module libsofthsm2.so ---slot $slot --list-mechanisms > ... > RSA-PKCS, keySize={512,16384}, encrypt, decrypt, sign, verify, wrap, > unwrap > ... > > > > > Cheers, > Lingxian Kong > > On Wed, Jul 11, 2018 at 10:48 PM, Lingxian Kong > wrote: > > Hi Ade, > > > > Thanks for your reply. 
> > > > I just replaced `CKM_AES_CBC_PAD` with `CKM_RSA_PKCS` here[1], of > > course I defined `CKM_RSA_PKCS = 0x00000001` in the code, but still > > got the following error: > > > > Jul 11 10:42:05 barbican-devstack devstack at barbican-svc.service[198 > > 97]: 2018-07-11 10:42:05.309 19900 WARNING > > barbican.plugin.crypto.p11_crypto [req-f2d27105-4811-4c77-a321- > > 2ac1399cc9d2 b268f84aef814ae > > da17ad3fa38e0049d 7abe0e02baec4df2b6046d7ef7f44998 - default > > default] Reinitializing PKCS#11 library: HSM returned response > > code: 0x7L CKR_ARGUMENTS_BAD: P11CryptoPluginException: HSM > > returned response code: 0x7L CKR_ARGUMENTS_BAD > > > > [1]: https://github.com/openstack/barbican/blob/5dea5cec130b59ecfb8 > > d46435cd7eb3212894b4c/barbican/plugin/crypto/pkcs11.py#L496 > > > > > > Cheers, > > Lingxian Kong > > > > On Wed, Jul 11, 2018 at 9:18 PM, Ade Lee wrote: > > > Lingxian, > > > > > > I don't see any reason not to provide support for other wrapping > > > mechanisms. > > > > > > Have you tried hacking the code to use one of the other wrapping > > > mechanisms to see if it works? Ultimately, what is passed are > > > parameters to CFFI. As long as you pass in the right input and > > > your > > > PKCS#11 library can support it, then there should be no problem. > > > > > > If it works, it makes sense to make the wrapping algorithm > > > configurable > > > for the plugin. > > > > > > It may or may not make sense to store the wrapping algorithm used > > > in > > > the secret plugin-metadata if we want to support migration to > > > other > > > HSMs. > > > > > > Ade > > From bodenvmw at gmail.com Thu Jul 12 13:26:52 2018 From: bodenvmw at gmail.com (Boden Russell) Date: Thu, 12 Jul 2018 07:26:52 -0600 Subject: [openstack-dev] [neutron] Stable review In-Reply-To: References: <20180712091309.GC22285@thor.bakeyournoodle.com> Message-ID: On 7/12/18 4:10 AM, Takashi Yamamoto wrote: > On Thu, Jul 12, 2018 at 6:13 PM, Tony Breeds wrote: >> >> No we need more contributors to stable and extended maintenance periods. >> This is not a new problem, and one we're trying to correct. > > actually it is a new problem. at least worse than before. > I'm no expert, but wanted to add my $0.02 as a developer who's invested substantial time in trying to keep a different networking project up to date with all the underpinning changes; some of which are noted in your midonet stable/queens patch. IMHO it's not realistic to think an OpenStack project (master or stable) can go without routine maintenance for extended period of time in this day and age; there are just too many dynamic underpinnings. A case in point are the changes required for the Zuul v3 workstream that don't appear to be fully propagated into a number of networking projects yet [1], midonet included. With that in mind I'm not sure we can just point at the neutron stable team; there are community wide initiatives that ultimate drive underpinning changes across many projects. I've found that you either have to invest the time to "keep up", or "die". For reference I've been spending nearly 4 person weeks per release just on such "maintenance" items. It certainly takes time away from functionality that can be delivered per release, but it seems it's just part of the work necessary to keep your project "current". If are wanting to reduce the amount work for projects to "stay current" then IMO it's certainly a bigger issue than neutron. 
[1] http://lists.openstack.org/pipermail/openstack-dev/2018-June/131801.html From cboylan at sapwetik.org Thu Jul 12 13:37:52 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Thu, 12 Jul 2018 06:37:52 -0700 Subject: [openstack-dev] [requirements][infra] Maintaining constraints for several python versions In-Reply-To: <20180712043457.GA22285@thor.bakeyournoodle.com> References: <20180712043457.GA22285@thor.bakeyournoodle.com> Message-ID: <1531402672.2677182.1438491376.070BFAE6@webmail.messagingengine.com> On Wed, Jul 11, 2018, at 9:34 PM, Tony Breeds wrote: > Hi Folks, > We have a pit of a problem in openstack/requirements and I'd liek to > chat about it. > > Currently when we generate constraints we create a venv for each > (system) python supplied on the command line, install all of > global-requirements into that venv and capture the pip freeze. > > Where this falls down is if we want to generate a freeze for python 3.4 > and 3.5 we need an image that has both of those. We cheated and just > 'clone' them so if python3 is 3.4 we copy the results to 3.5 and vice > versa. This kinda worked for a while but it has drawbacks. > > I can see a few of options: > > 1. Build pythons from source and use that to construct the venv > [please no] Fungi mentions that 3.3 and 3.4 don't build easily on modern linux distros. However, 3.3 and 3.4 are also unsupported by Python at this point, maybe we can ignore them and focus on 3.5 and forward? We don't build new freeze lists for the stable branches, this is just a concern for master right? > > 2. Generate the constraints in an F28 image. My F28 has ample python > versions: > - /usr/bin/python2.6 > - /usr/bin/python2.7 > - /usr/bin/python3.3 > - /usr/bin/python3.4 > - /usr/bin/python3.5 > - /usr/bin/python3.6 > - /usr/bin/python3.7 > I don't know how valid this still is but in the past fedora images > have been seen as unstable and hard to keep current. If that isn't > still the feeling then we could go down this path. Currently there a > few minor problems with bindep.txt on fedora and generate-constraints > doesn't work with py3 but these are pretty minor really. I think most of the problems with Fedora stability are around bringing up a new Fedora every 6 months or so. They tend to change sufficiently within that time period to make this a fairly involved exercise. But once working they work for the ~13 months of support they offer. I know Paul Belanger would like to iterate more quickly and just keep the most recent Fedora available (rather than ~2). > > 3. Use docker images for python and generate the constraints with > them. I've hacked up something we could use as a base for that in: > https://review.openstack.org/581948 > > There are lots of open questions: > - How do we make this nodepool/cloud provider friendly ? > * Currently the containers just talk to the main debian mirrors. > Do we have debian packages? If so we could just do sed magic. http://$MIRROR/debian (http://mirror.dfw.rax.openstack.org/debian for example) should be a working amd64 debian package mirror. > - Do/Can we run a registry per provider? We do not, but we do have a caching dockerhub registry proxy in each region/provider. http://$MIRROR:8081/registry-1.docker if using older docker and http://$MIRROR:8082 for current docker. This was a compromise between caching the Internet and reliability. 
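As a rough, untested sketch of how a node could consume that proxy (using the DFW RAX hostname above purely as the example, and assuming current docker): set it as a mirror in /etc/docker/daemon.json and restart the docker daemon, e.g.

  {
    "registry-mirrors": ["http://mirror.dfw.rax.openstack.org:8082"],
    "insecure-registries": ["mirror.dfw.rax.openstack.org:8082"]
  }

The insecure-registries entry may be needed because the endpoint is plain http, and in practice the hostname would be the local region's mirror rather than a hard-coded one.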
> - Can we generate and caches these images and only run pip install -U > g-r to speed up the build Between cached upstream python docker images and prebuilt wheels mirrored in every cloud provider region I wonder if this will save a significant amount of time? May be worth starting without this and working from there if it remains slow. > - Are we okay with using docker this way? Should be fine, particularly if we are consuming the official Python images. > > I like #2 the most but I wanted to seek wider feedback. I think each proposed option should work as long as we understand the limitations each presents. #2 should work fine if we have individuals interested and able to spin up new Fedora images and migrate jobs to that image after releases happen. Clark From fungi at yuggoth.org Thu Jul 12 13:52:56 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 12 Jul 2018 13:52:56 +0000 Subject: [openstack-dev] [requirements][infra] Maintaining constraints for several python versions In-Reply-To: <1531402672.2677182.1438491376.070BFAE6@webmail.messagingengine.com> References: <20180712043457.GA22285@thor.bakeyournoodle.com> <1531402672.2677182.1438491376.070BFAE6@webmail.messagingengine.com> Message-ID: <20180712135256.556k4flo56n3ufkq@yuggoth.org> On 2018-07-12 06:37:52 -0700 (-0700), Clark Boylan wrote: [...] > I think most of the problems with Fedora stability are around > bringing up a new Fedora every 6 months or so. They tend to change > sufficiently within that time period to make this a fairly > involved exercise. But once working they work for the ~13 months > of support they offer. I know Paul Belanger would like to iterate > more quickly and just keep the most recent Fedora available > (rather than ~2). [...] Regardless its instability/churn makes it unsuitable for stable branch jobs because the support lifetime of the distro release is shorter than the maintenance lifetime of our stable branches. Would probably be fine for master branch jobs but not beyond, right? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From haleyb.dev at gmail.com Thu Jul 12 14:10:36 2018 From: haleyb.dev at gmail.com (Brian Haley) Date: Thu, 12 Jul 2018 10:10:36 -0400 Subject: [openstack-dev] [neutron] Stable review In-Reply-To: References: Message-ID: <3c61db1e-2e22-2d57-0d28-1431d0d57044@gmail.com> On 07/12/2018 02:53 AM, Takashi Yamamoto wrote: > hi, > > queens branch of networking-midonet has had no changes merged since > its creation. > the following commit would tell you how many gate blockers have been > accumulated. > https://review.openstack.org/#/c/572242/ > > it seems the stable team doesn't have a bandwidth to review subprojects > in a timely manner. i'm afraid that we need some policy changes. In the future I would recommend just adding someone from the neutron stable team to the review, as we (I) don't have the bandwidth to go through the reviews of every sub-project. Between Miguel, Armando, Gary and myself we can usually get to things pretty quickly. 
https://review.openstack.org/#/admin/groups/539,members -Brian From mjturek at linux.vnet.ibm.com Thu Jul 12 14:34:56 2018 From: mjturek at linux.vnet.ibm.com (Michael Turek) Date: Thu, 12 Jul 2018 10:34:56 -0400 Subject: [openstack-dev] [ironic] Ironic Bug Day July 12 2018 1:00 - 2:00 PM UTC In-Reply-To: <59189c21-797e-1777-1858-6019353889cb@linux.vnet.ibm.com> References: <59189c21-797e-1777-1858-6019353889cb@linux.vnet.ibm.com> Message-ID: <8ca3608c-27c5-d1a8-4857-6cf50886ae7a@linux.vnet.ibm.com> Hey all, This month's bug day went pretty well! We discussed about 20 bugs (half old, half new). Many were triaged, some got marked invalid. For meeting minutes and details, see the etherpad [0]. The attendance was a bit low (Thank you for attending Julia and Adam!), but could be due to vacations that started last week ending. Either way, we decided to confirm the bug day for next month to give ample notice and hopefully improve attendance. I'd also like to encourage people to bring a bug with them that they consider interesting, overlooked, or important next time. Next bug day will be August 2nd @ 13:00 - 14:00 UTC. Etherpad can be found here https://etherpad.openstack.org/p/ironic-bug-day-august-2018 If you have any questions or have any ideas to improve bug day, please don't hesitate to reach out to me! Hope to see you there! Thanks! Mike Turek [0] https://etherpad.openstack.org/p/ironic-bug-day-july-2018 On 7/10/18 4:31 PM, Michael Turek wrote: > Hey all, > > This month's bug day was delayed a week and will take place on > Thursday the 12th from 1:00 UTC to 2:00 UTC > > For location, time, and agenda details please see > https://etherpad.openstack.org/p/ironic-bug-day-july-2018 > > If you would like to propose topics, feel free to do it in the etherpad! > > Thanks, > Mike Turek > From mriedemos at gmail.com Thu Jul 12 14:45:49 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 12 Jul 2018 09:45:49 -0500 Subject: [openstack-dev] [nova] What do we lose if the reshaper stuff doesn't land in Rocky? Message-ID: <7ed08ae5-40c3-5a6b-370e-1a06d974a7e5@gmail.com> Continuing the discussion from the nova meeting today [1], I'm trying to figure out what the risk / benefit / contingency is if we don't get the reshaper stuff done in Rocky. In a nutshell, we need reshaper to migrate VGPU inventory for the libvirt and xenapi drivers from the root compute node resource provider to child providers in the compute node provider tree, because then we can support multiple VGPU type inventory on the same compute host. [2] Looking at the status of the vgpu-rocky blueprint [3], the libvirt changes are in merge conflict but the xenapi changes are ready to go. What I'm wondering is if we don't get reshaper done in Rocky, what does that prevent us from doing in Stein? For example, does it mean we can't support modeling NUMA in placement until the T release? Or does it just mean that we lose the upgrade window from Rocky to Stein such that we expect people to run the reshaper migration so that Stein code can assume the migration has been done and model nested resource providers? If the former (no NUMA modeling until T), that's a big deal. If the latter, it makes the Stein code more complicated but it doesn't sound impossible, right? Wouldn't the Stein code just need to add some checking to see if the migration has been done before it can support some new features? 
Obviously if we don't have reshaper done in Rocky then the xenapi driver can't support multiple VGPU types on the same compute host in Rocky - but isn't that kind of the exact same situation if we don't get reshaper done until Stein? [1] http://eavesdrop.openstack.org/meetings/nova/2018/nova.2018-07-12-14.00.log.html#l-71 [2] https://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/vgpu-rocky.html [3] https://review.openstack.org/#/q/topic:bp/vgpu-rocky+(status:open+OR+status:merged) -- Thanks, Matt From cjeanner at redhat.com Thu Jul 12 14:45:56 2018 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Thu, 12 Jul 2018 16:45:56 +0200 Subject: [openstack-dev] [Tripleo] New validation: ensure we actually have enough disk space on the undercloud Message-ID: <008b5fb0-002e-801c-8328-e1ae8ed911cb@redhat.com> Dear Stackers, I'm currently looking for some inputs in order to get a new validation, ran as a "preflight check" on the undercloud. The aim is to ensure we actually have enough disk space for all the files and, most importantly, the registry, being local on the undercloud, or remote (provided the operator has access to it, of course). Although the doc talks about minimum requirements, there's the "never trust the user inputs" law, so it would be great to ensure the user didn't overlook the requirements regarding disk space. The "right" way would be to add a new validation directly in the tripleo-validations repository, and run it at an early stage of the undercloud deployment (and maybe once again before the overcloud deploy starts, as disk space will probably change due to the registry and logs and packages and so on). There are a few details on this public trello card: https://trello.com/c/QqBsMmP9/89-implement-storage-space-checks What do you think? Care to provide some hints and tips for the correct implementation? Thank you! Bests, C. -- Cédric Jeanneret Software Engineer DFG:DF -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From jaypipes at gmail.com Thu Jul 12 14:47:00 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Thu, 12 Jul 2018 10:47:00 -0400 Subject: [openstack-dev] [nova] What do we lose if the reshaper stuff doesn't land in Rocky? In-Reply-To: <7ed08ae5-40c3-5a6b-370e-1a06d974a7e5@gmail.com> References: <7ed08ae5-40c3-5a6b-370e-1a06d974a7e5@gmail.com> Message-ID: <4399b68b-e751-206b-0e3c-be98015574c9@gmail.com> Let's just get the darn thing done in Rocky. I will have the DB work up for review today. -jay On 07/12/2018 10:45 AM, Matt Riedemann wrote: > Continuing the discussion from the nova meeting today [1], I'm trying to > figure out what the risk / benefit / contingency is if we don't get the > reshaper stuff done in Rocky. > > In a nutshell, we need reshaper to migrate VGPU inventory for the > libvirt and xenapi drivers from the root compute node resource provider > to child providers in the compute node provider tree, because then we > can support multiple VGPU type inventory on the same compute host. [2] > > Looking at the status of the vgpu-rocky blueprint [3], the libvirt > changes are in merge conflict but the xenapi changes are ready to go. > > What I'm wondering is if we don't get reshaper done in Rocky, what does > that prevent us from doing in Stein? For example, does it mean we can't > support modeling NUMA in placement until the T release? 
Or does it just > mean that we lose the upgrade window from Rocky to Stein such that we > expect people to run the reshaper migration so that Stein code can > assume the migration has been done and model nested resource providers? > > If the former (no NUMA modeling until T), that's a big deal. If the > latter, it makes the Stein code more complicated but it doesn't sound > impossible, right? Wouldn't the Stein code just need to add some > checking to see if the migration has been done before it can support > some new features? > > Obviously if we don't have reshaper done in Rocky then the xenapi driver > can't support multiple VGPU types on the same compute host in Rocky - > but isn't that kind of the exact same situation if we don't get reshaper > done until Stein? > > [1] > http://eavesdrop.openstack.org/meetings/nova/2018/nova.2018-07-12-14.00.log.html#l-71 > > [2] > https://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/vgpu-rocky.html > > [3] > https://review.openstack.org/#/q/topic:bp/vgpu-rocky+(status:open+OR+status:merged) > > From hjensas at redhat.com Thu Jul 12 14:56:38 2018 From: hjensas at redhat.com (Harald =?ISO-8859-1?Q?Jens=E5s?=) Date: Thu, 12 Jul 2018 16:56:38 +0200 Subject: [openstack-dev] [tripleo] Rocky blueprints In-Reply-To: References: Message-ID: <1af27033c1eefff3145c9227b8759b2f6d456dfb.camel@redhat.com> On Wed, 2018-07-11 at 10:39 -0600, Alex Schultz wrote: > Hello everyone, > > As milestone 3 is quickly approaching, it's time to review the open > blueprints[0] and their status. It appears that we have made good > progress on implementing significant functionality this cycle but we > still have some open items. Below is the list of blueprints that are > still open so we'll want to see if they will make M3 and if not, we'd > like to move them out to Stein and they won't make Rocky without an > FFE. Thanks for the reminder. I'd like an FFE for the tripleo-routed- networks-templates blueprint. (Hope this is formal enough.) > Currently open with pending patches (may need FFE): > > - https://blueprints.launchpad.net/tripleo/+spec/tripleo-routed-netwo > rks-templates > I have made quite a bit of progress on this over the last couple of weeks. There is a bit more too do, but the two sets of changes up there does improve things incrementally. All the patches are under this topic: - https://review.openstack.org/#/q/topic:bp/tripleo-routed-networks-tem plates+(status:open+OR+status:merged) If we manage to land the two patch series starting with ... - https://review.openstack.org/579580 and: - https://review.openstack.org/580235 ... completing the ones starting with https://review.openstack.org/5821 80 and a couple of more follow ups should be achievable before RC1. (I will be on PTO after tomorrow, returning August 13.) Over the last couple of days I have also started using rdocloud and OVB. I pushed this pull request yesterday: https://github.com/cybertron /openstack-virtual-baremetal/pull/43. (We should be able to re-use this in CI to get better coverage.) These changes reduce the complexity of configuring routed networks for the end-user greatly. I.e use the same overcloud node network config template for roles in different routed networks, and remove the need to do hiera overrides such as: ComputeLeaf2ExtraConfig: nova::compute::libvirt::vncserver_listen: "%{hiera('internal_api2')}" neutron::agents::ml2::ovs::local_ip: "%{hiera('tenant2')}" [ ... and so on ... 
] -- Harald Jensås From bdobreli at redhat.com Thu Jul 12 15:05:40 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Thu, 12 Jul 2018 18:05:40 +0300 Subject: [openstack-dev] [tripleo] Rocky blueprints In-Reply-To: References: Message-ID: On 7/11/18 7:39 PM, Alex Schultz wrote: > Hello everyone, > > As milestone 3 is quickly approaching, it's time to review the open > blueprints[0] and their status. It appears that we have made good > progress on implementing significant functionality this cycle but we > still have some open items. Below is the list of blueprints that are > still open so we'll want to see if they will make M3 and if not, we'd > like to move them out to Stein and they won't make Rocky without an > FFE. > > Currently not marked implemented but without any open patches (likely > implemented): > - https://blueprints.launchpad.net/tripleo/+spec/major-upgrade-workflow > - https://blueprints.launchpad.net/tripleo/+spec/tripleo-predictable-ctlplane-ips > > Currently open with pending patches (may need FFE): > - https://blueprints.launchpad.net/tripleo/+spec/config-download-ui > - https://blueprints.launchpad.net/tripleo/+spec/container-prepare-workflow > - https://blueprints.launchpad.net/tripleo/+spec/containerized-undercloud This needs FFE please. The remaining work [0] is mostly cosmetic (defaults switching) though it's somewhat blocked on CI infrastructure readiness [1] for containerized undercloud and overcloud deployments. The situation had been drastically improved by the recent changes though, like longer container images caching, enabling ansible pipelining, using shared local container registries for undercloud and overcloud deployments and may be more I'm missing. There is also ongoing work to mitigate the CI walltime [2]. [0] http://lists.openstack.org/pipermail/openstack-dev/2018-July/132126.html [1] https://trello.com/c/1yDVHmqm/115-switch-remaining-ci-jobs [2] https://trello.com/c/PpNtarue/126-ci-break-the-openstack-infra-3h-timeout-wall > - https://blueprints.launchpad.net/tripleo/+spec/bluestore > - https://blueprints.launchpad.net/tripleo/+spec/gui-node-discovery-by-range > - https://blueprints.launchpad.net/tripleo/+spec/multiarch-support > - https://blueprints.launchpad.net/tripleo/+spec/tripleo-routed-networks-templates > - https://blueprints.launchpad.net/tripleo/+spec/sriov-vfs-as-network-interface > - https://blueprints.launchpad.net/tripleo/+spec/custom-validations > > Currently open without work (should be moved to Stein): > - https://blueprints.launchpad.net/tripleo/+spec/automated-ui-testing > - https://blueprints.launchpad.net/tripleo/+spec/plan-from-git-in-gui > - https://blueprints.launchpad.net/tripleo/+spec/tripleo-ui-react-walkthrough > - https://blueprints.launchpad.net/tripleo/+spec/wrapping-workflow-for-node-operations > - https://blueprints.launchpad.net/tripleo/+spec/ironic-overcloud-ci > > > Please take some time to review this list and update it. If you think > you are close to finishing out the feature and would like to request > an FFE please start getting that together with appropriate details and > justifications for the FFE. 
> > Thanks, > -Alex > > [0] https://blueprints.launchpad.net/tripleo/rocky > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Best regards, Bogdan Dobrelya, Irc #bogdando From e0ne at e0ne.info Thu Jul 12 15:53:10 2018 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Thu, 12 Jul 2018 18:53:10 +0300 Subject: [openstack-dev] [horizon] Planning Etherpad for Denver PTG Message-ID: Hi team, I've created an etherpad [1] to gather topics for PTG discussions in Denver. Please, do not hesitate to add any topic you think is valuable even you won't attend PTG. I hope to see all of you in September! [1] https://etherpad.openstack.org/p/horizon-ptg-planning-denver-2018 Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From prometheanfire at gentoo.org Thu Jul 12 16:05:09 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Thu, 12 Jul 2018 11:05:09 -0500 Subject: [openstack-dev] [requirements][infra] Maintaining constraints for several python versions In-Reply-To: <20180712135256.556k4flo56n3ufkq@yuggoth.org> References: <20180712043457.GA22285@thor.bakeyournoodle.com> <1531402672.2677182.1438491376.070BFAE6@webmail.messagingengine.com> <20180712135256.556k4flo56n3ufkq@yuggoth.org> Message-ID: <20180712160509.4caksal22ocw6klr@gentoo.org> On 18-07-12 13:52:56, Jeremy Stanley wrote: > On 2018-07-12 06:37:52 -0700 (-0700), Clark Boylan wrote: > [...] > > I think most of the problems with Fedora stability are around > > bringing up a new Fedora every 6 months or so. They tend to change > > sufficiently within that time period to make this a fairly > > involved exercise. But once working they work for the ~13 months > > of support they offer. I know Paul Belanger would like to iterate > > more quickly and just keep the most recent Fedora available > > (rather than ~2). > [...] > > Regardless its instability/churn makes it unsuitable for stable > branch jobs because the support lifetime of the distro release is > shorter than the maintenance lifetime of our stable branches. Would > probably be fine for master branch jobs but not beyond, right? I'm of the opinion that we should decouple from distro supported python versions and rely on what versions upstream python supports (longer lifetimes than our releases iirc). -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From jungleboyj at gmail.com Thu Jul 12 16:21:51 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Thu, 12 Jul 2018 11:21:51 -0500 Subject: [openstack-dev] [OSSN-0084] Data retained after deletion of a ScaleIO volume In-Reply-To: References: <2dc4a6ce-b5a3-8cee-724c-4295fd595f54@redhat.com> Message-ID: On 7/11/2018 1:20 AM, Luke Hinds wrote: > > > On Tue, Jul 10, 2018 at 9:08 PM, Jim Rollenhagen > > wrote: > > On Tue, Jul 10, 2018 at 3:28 PM, Martin Chlumsky > > wrote: > > It is the workaround that is right and the discussion part > that is wrong. > > I am familiar with this bug. Using thin volumes > _and/or_ enabling zero padding DOES ensure data contained > in a volume is actually deleted. > > > Great, that's super helpful. Thanks! > > Is there someone (Luke?) 
on the list that can send a correction > for this OSSN to all the lists it needs to go to? > > // jim > > > It can, but I would want to be sure we get an agreed consensus. The > note has already gone through a review cycle where a cinder core > approved the contents: > > https://review.openstack.org/#/c/579094/ > > If someone wants to put forward a patch with the needed amendments , I > can send out a correction to the lists. > All, I have forwarded this note on to Helen Walsh at Dell EMC (Walsh, Helen ) as they do not monitor the mailing list as closely.  Hopefully we can get her engaged to ensure we get the right update out there. Thanks! > > On Tue, Jul 10, 2018 at 10:41 AM Jim Rollenhagen > > wrote: > > On Tue, Jul 10, 2018 at 4:20 AM, Luke Hinds > > wrote: > > Data retained after deletion of a ScaleIO volume > --- > > ### Summary ### > Certain storage volume configurations allow newly > created volumes to > contain previous data. This could lead to leakage of > sensitive > information between tenants. > > ### Affected Services / Software ### > Cinder releases up to and including Queens with > ScaleIO volumes > using thin volumes and zero padding. > > > According to discussion in the bug, this bug occurs with > ScaleIO volumes using thick volumes and with zero padding > disabled. > > If the bug is with thin volumes and zero padding, then the > workaround seems quite wrong. :) > > I'm not super familiar with Cinder, so could some Cinder > folks check this out and re-issue a more accurate OSSN, > please? > > // jim > > > ### Discussion ### > Using both thin volumes and zero padding does not > ensure data contained > in a volume is actually deleted. The default volume > provisioning rule is > set to thick so most installations are likely not > affected. Operators > can check their configuration in `cinder.conf` or > check for zero padding > with this command `scli --query_all`. > > #### Recommended Actions #### > > Operators can use the following two workarounds, until > the release of > Rocky (planned 30th August 2018) which resolves the issue. > > 1. Swap to thin volumes > > 2. 
Ensure ScaleIO storage pools use zero-padding with: > > `scli --modify_zero_padding_policy >     (((--protection_domain_id | >     --protection_domain_name ) >     --storage_pool_name ) | --storage_pool_id ) >     (--enable_zero_padding | --disable_zero_padding)` > > ### Contacts / References ### > Author: Nick Tait > This OSSN : > https://wiki.openstack.org/wiki/OSSN/OSSN-0084 > > Original LaunchPad Bug : > https://bugs.launchpad.net/ossn/+bug/1699573 > > Mailing List : [Security] tag on > openstack-dev at lists.openstack.org > > OpenStack Security Project : > https://launchpad.net/~openstack-ossg > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage > questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > -- > Luke Hinds | NFV Partner Engineering | CTO Office | Red Hat > e: lhinds at redhat.com  | irc: lhinds > @freenode |t: +44 12 52 36 2483 > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Thu Jul 12 16:25:22 2018 From: openstack at nemebean.com (Ben Nemec) Date: Thu, 12 Jul 2018 11:25:22 -0500 Subject: [openstack-dev] [oslo] Stein PTG planning etherpad Message-ID: All the cool kids are doing it, so here's one for Oslo: https://etherpad.openstack.org/p/oslo-stein-ptg-planning I've populated it with a few topics that I expect to discuss, but feel free to add anything you're interested in. Thanks. -Ben From msm at redhat.com Thu Jul 12 16:31:21 2018 From: msm at redhat.com (Michael McCune) Date: Thu, 12 Jul 2018 12:31:21 -0400 Subject: [openstack-dev] [all][api] POST /api-sig/news Message-ID: Greetings OpenStack community, Today's meeting was very brief as both cdent and dtantsur were out. There were no major items of discussion, but we did acknowledge the efforts of the GraphQL proof of concept work[7] being led by Gilles Dubreuil. This work continues to make progress and should provide an interesting data point for the possibiity of future GraphQL usages. In addition to the light discussion there was also one guideline update that was merged this week, and a small infrastructure-related patch that was merged. 
As always if you're interested in helping out, in addition to coming to the meetings, there's also: * The list of bugs [5] indicates several missing or incomplete guidelines. * The existing guidelines [2] always need refreshing to account for changes over time. If you find something that's not quite right, submit a patch [6] to fix it. * Have you done something for which you think guidance would have made things easier but couldn't find any? Submit a patch and help others [6]. # Newly Published Guidelines * Expand error code document to expect clarity https://review.openstack.org/#/c/577118/ # API Guidelines Proposed for Freeze Guidelines that are ready for wider review by the whole community. None # Guidelines Currently Under Review [3] * Add links to errors-example.json https://review.openstack.org/#/c/578369/ * Update parameter names in microversion sdk spec https://review.openstack.org/#/c/557773/ * Add API-schema guide (still being defined) https://review.openstack.org/#/c/524467/ * A (shrinking) suite of several documents about doing version and service discovery Start at https://review.openstack.org/#/c/459405/ * WIP: microversion architecture archival doc (very early; not yet ready for review) https://review.openstack.org/444892 # Highlighting your API impacting issues If you seek further review and insight from the API SIG about APIs that you are developing or changing, please address your concerns in an email to the OpenStack developer mailing list[1] with the tag "[api]" in the subject. In your email, you should include any relevant reviews, links, and comments to help guide the discussion of the specific challenge you are facing. To learn more about the API SIG mission and the work we do, see our wiki page [4] and guidelines [2]. Thanks for reading and see you next week! # References [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [2] http://specs.openstack.org/openstack/api-wg/ [3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z [4] https://wiki.openstack.org/wiki/API_SIG [5] https://bugs.launchpad.net/openstack-api-wg [6] https://git.openstack.org/cgit/openstack/api-wg [7] https://storyboard.openstack.org/#!/story/2002782 Meeting Agenda https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda Past Meeting Records http://eavesdrop.openstack.org/meetings/api_sig/ Open Bugs https://bugs.launchpad.net/openstack-api-wg From mordred at inaugust.com Thu Jul 12 16:31:34 2018 From: mordred at inaugust.com (Monty Taylor) Date: Thu, 12 Jul 2018 11:31:34 -0500 Subject: [openstack-dev] [requirements][infra] Maintaining constraints for several python versions In-Reply-To: <1531402672.2677182.1438491376.070BFAE6@webmail.messagingengine.com> References: <20180712043457.GA22285@thor.bakeyournoodle.com> <1531402672.2677182.1438491376.070BFAE6@webmail.messagingengine.com> Message-ID: <2975d14a-bfe7-5fe4-7b07-7419787bd17c@inaugust.com> On 07/12/2018 08:37 AM, Clark Boylan wrote: > On Wed, Jul 11, 2018, at 9:34 PM, Tony Breeds wrote: >> Hi Folks, >> We have a pit of a problem in openstack/requirements and I'd liek to >> chat about it. >> >> Currently when we generate constraints we create a venv for each >> (system) python supplied on the command line, install all of >> global-requirements into that venv and capture the pip freeze. >> >> Where this falls down is if we want to generate a freeze for python 3.4 >> and 3.5 we need an image that has both of those. 
We cheated and just >> 'clone' them so if python3 is 3.4 we copy the results to 3.5 and vice >> versa. This kinda worked for a while but it has drawbacks. >> >> I can see a few of options: >> >> 1. Build pythons from source and use that to construct the venv >> [please no] > > Fungi mentions that 3.3 and 3.4 don't build easily on modern linux distros. However, 3.3 and 3.4 are also unsupported by Python at this point, maybe we can ignore them and focus on 3.5 and forward? We don't build new freeze lists for the stable branches, this is just a concern for master right? FWIW, I use pyenv for python versions on my laptop and love it. I've completely given up on distro-provided python for my own usage. >> >> 2. Generate the constraints in an F28 image. My F28 has ample python >> versions: >> - /usr/bin/python2.6 >> - /usr/bin/python2.7 >> - /usr/bin/python3.3 >> - /usr/bin/python3.4 >> - /usr/bin/python3.5 >> - /usr/bin/python3.6 >> - /usr/bin/python3.7 >> I don't know how valid this still is but in the past fedora images >> have been seen as unstable and hard to keep current. If that isn't >> still the feeling then we could go down this path. Currently there a >> few minor problems with bindep.txt on fedora and generate-constraints >> doesn't work with py3 but these are pretty minor really. > > I think most of the problems with Fedora stability are around bringing up a new Fedora every 6 months or so. They tend to change sufficiently within that time period to make this a fairly involved exercise. But once working they work for the ~13 months of support they offer. I know Paul Belanger would like to iterate more quickly and just keep the most recent Fedora available (rather than ~2). > >> >> 3. Use docker images for python and generate the constraints with >> them. I've hacked up something we could use as a base for that in: >> https://review.openstack.org/581948 >> >> There are lots of open questions: >> - How do we make this nodepool/cloud provider friendly ? >> * Currently the containers just talk to the main debian mirrors. >> Do we have debian packages? If so we could just do sed magic. > > http://$MIRROR/debian (http://mirror.dfw.rax.openstack.org/debian for example) should be a working amd64 debian package mirror. > >> - Do/Can we run a registry per provider? > > We do not, but we do have a caching dockerhub registry proxy in each region/provider. http://$MIRROR:8081/registry-1.docker if using older docker and http://$MIRROR:8082 for current docker. This was a compromise between caching the Internet and reliability. there is also https://review.openstack.org/#/c/580730/ which adds a role to install docker and configure it to use the correct registry. >> - Can we generate and caches these images and only run pip install -U >> g-r to speed up the build > > Between cached upstream python docker images and prebuilt wheels mirrored in every cloud provider region I wonder if this will save a significant amount of time? May be worth starting without this and working from there if it remains slow. > >> - Are we okay with using docker this way? > > Should be fine, particularly if we are consuming the official Python images. Agree. python:3.6 and friends are great. >> >> I like #2 the most but I wanted to seek wider feedback. > > I think each proposed option should work as long as we understand the limitations each presents. #2 should work fine if we have individuals interested and able to spin up new Fedora images and migrate jobs to that image after releases happen. 
> > Clark > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From mordred at inaugust.com Thu Jul 12 16:33:02 2018 From: mordred at inaugust.com (Monty Taylor) Date: Thu, 12 Jul 2018 11:33:02 -0500 Subject: [openstack-dev] [requirements][infra] Maintaining constraints for several python versions In-Reply-To: <20180712160509.4caksal22ocw6klr@gentoo.org> References: <20180712043457.GA22285@thor.bakeyournoodle.com> <1531402672.2677182.1438491376.070BFAE6@webmail.messagingengine.com> <20180712135256.556k4flo56n3ufkq@yuggoth.org> <20180712160509.4caksal22ocw6klr@gentoo.org> Message-ID: On 07/12/2018 11:05 AM, Matthew Thode wrote: > On 18-07-12 13:52:56, Jeremy Stanley wrote: >> On 2018-07-12 06:37:52 -0700 (-0700), Clark Boylan wrote: >> [...] >>> I think most of the problems with Fedora stability are around >>> bringing up a new Fedora every 6 months or so. They tend to change >>> sufficiently within that time period to make this a fairly >>> involved exercise. But once working they work for the ~13 months >>> of support they offer. I know Paul Belanger would like to iterate >>> more quickly and just keep the most recent Fedora available >>> (rather than ~2). >> [...] >> >> Regardless its instability/churn makes it unsuitable for stable >> branch jobs because the support lifetime of the distro release is >> shorter than the maintenance lifetime of our stable branches. Would >> probably be fine for master branch jobs but not beyond, right? > > I'm of the opinion that we should decouple from distro supported python > versions and rely on what versions upstream python supports (longer > lifetimes than our releases iirc). Yeah. I don't want to boil the ocean too much ... but as I mentioned in my other reply, I'm very pleased with pyenv. I would not be opposed to switching to that for all of our python installation needs. OTOH, I'm not going to push for it, nor do I have time to implement such a switch. But I'd vote for it and cheer someone on if they did. From doug at doughellmann.com Thu Jul 12 17:20:14 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 12 Jul 2018 13:20:14 -0400 Subject: [openstack-dev] [release][ptl] Release countdown for week R-6, July 16-20 Message-ID: <1531415931-sup-9945@lrrr.local> Development Focus ----------------- Teams should be focused on implementing planned work. Work should be wrapping up on non-client libraries to meet the lib deadline Thursday, the 19th. General Information ------------------- We are now getting close to the end of the cycle. The non-client library (typically any lib other than the "python-$PROJECTclient" deliverables) deadline is 19 July, followed quickly the next Thursday with the final client library release. Releases for critical fixes will be allowed after this point, but we will be much more restrictive about what is allowed if there are more lib release requests after this point. Please keep this in mind. 
When requesting these library releases, you should also include the stable branching request with the review (as an example, see the "branches" section here: http://git.openstack.org/cgit/openstack/releases/tree/deliverables/pike/os-brick.yaml#n2) Upcoming Deadlines & Dates -------------------------- Final non-client library release deadline: July 19 Final client library release deadline: July 26 Rocky-3 Milestone: July 26 From clint at fewbar.com Thu Jul 12 20:02:59 2018 From: clint at fewbar.com (Clint Byrum) Date: Thu, 12 Jul 2018 13:02:59 -0700 Subject: [openstack-dev] [kolla][nova] Safe guest shutdowns with kolla? Message-ID: <153142577960.6991.11153929931053474192@ubuntu> Greetings! We've been deploying with Kolla on CentOS 7 now for a while, and we've recently noticed a rather troubling behavior when we shutdown hypervisors. Somewhere between systemd and libvirt's systemd-machined integration, we see that guests get killed aggressively by SIGTERM'ing all of the qemu-kvm processes. This seems to happen because they are scoped into machine.slice, but systemd-machined is killed which drops those scopes and thus results in killing off the machines. In the past, we've used the libvirt-guests service when our libvirt was running outside of containers. This worked splendidly, as we could have it wait 5 minutes for VMs to attempt a graceful shutdown, avoiding interrupting any running processes. But this service isn't available on the host OS, as it won't be able to talk to libvirt inside the container. The solution I've come up with for now is this: [Unit] Description=Manage libvirt guests in kolla safely After=docker.service systemd-machined.service Requires=docker.service [Install] WantedBy=sysinit.target [Service] Type=oneshot RemainAfterExit=yes TimeoutStopSec=400 ExecStart=/usr/bin/docker exec nova_libvirt /usr/libexec/libvirt-guests.sh start ExecStart=/usr/bin/docker start nova_compute ExecStop=/usr/bin/docker stop nova_compute ExecStop=/usr/bin/docker exec nova_libvirt /usr/libexec/libvirt-guests.sh shutdown This doesn't seem to work, though I'm still trying to work out the ordering and such. It should ensure that before we stop the systemd-machined and destroy all of its scopes (thus, killing all the vms), we run the libvirt-guests.sh script to try and shut them down. The TimeoutStopSec=400 is because the script itself waits 300 seconds for any VM that refuses to shutdown cleanly, so this gives it a chance to wait for at least one of those. This is an imperfect solution but it allows us to move forward after having made a reasonable attempt at clean shutdowns. Anyway, just wondering if anybody else using kolla-ansible or kolla containers in general have run into this problem, and whether or not there are better/known solutions. Thanks! From zigo at debian.org Thu Jul 12 20:38:48 2018 From: zigo at debian.org (Thomas Goirand) Date: Thu, 12 Jul 2018 22:38:48 +0200 Subject: [openstack-dev] OpenStack lagging behind 2 major python versions: we need a Python 3.7 gate Message-ID: <8e139644-44f3-fe80-ecda-39c5dab06a0a@debian.org> Hi everyone! It's yet another of these emails where I'm going to complain out of frustration because of OpenStack having bugs when running with the newest stuff... Sorry in advance ! :) tl;dr: It's urgent, we need Python 3.7 uwsgi + SSL gate jobs. Longer version: When Python 3.6 reached Debian, i already forwarded a few patches. It went quite ok, but still... 
When switching services to Python 3 for Newton, I discover that many services still had issues with uwsgi / mod_wsgi, and I spent a large amount of time trying to figure out ways to fix the situation. Some patches are still not yet merged, even though it was a community goal to have this support for Newton: Neutron: https://review.openstack.org/#/c/555608/ https://review.openstack.org/#/c/580049/ Neutron FWaaS: https://review.openstack.org/#/c/580327/ https://review.openstack.org/#/c/579433/ Horizon tempest plugin: https://review.openstack.org/#/c/575714/ Oslotet (clearly, the -1 is for someone considering only Devstack / venv, not understanding packaging environment): https://review.openstack.org/#/c/571962/ Designate: As much as I know, it still doesn't support uwsgi / mod_wsgi (please let me know if this changed recently). There may be more, I didn't have much time investigating some projects which are less important to me. Now, both Debian and Ubuntu have Python 3.7. Every package which I upload in Sid need to support that. Yet, OpenStack's CI is still lagging with Python 3.5. And there's lots of things currently broken. We've fixed most "async" stuff, though we are failing to rebuild oslo.messaging (from Queens) with Python 3.7: unit tests are just hanging doing nothing. I'm very happy to do small contributions to each and every component here and there whenever it's possible, but this time, it's becoming a little bit frustrating. I sometimes even got replies like "hum ... OpenStack only supports Python 3.5" a few times. That's not really acceptable, unfortunately. So moving forward, what I think needs to happen is: - Get each and every project to actually gate using uwsgi for the API, using both Python 3 and SSL (any other test environment is *NOT* a real production environment). - The gating has to happen with whatever is the latest Python 3 version available. Best would even be if we could have that *BEFORE* it reaches distributions like Debian and Ubuntu. I'm aware that there's been some attempts in the OpenStack infra to have Debian Sid (which is probably the distribution getting the updates the faster). This effort needs to be restarted, and some (non-voting ?) gate jobs needs to be setup using whatever the latest thing is. If it cannot happen with Sid, then I don't know, choose another platform, and do the Python 3-latest gating... The current situation with the gate still doing Python 3.5 only jobs is just not sustainable anymore. Moving forward, Python 2.7 will die. When this happens, moving faster with Python 3 versions will be mandatory for everyone, not only for fools like me who made the switch early. :) Cheers, Thomas Goirand (zigo) P.S: A big thanks to everyone who where helpful for making the switch to Python 3 in Debian, especially Annp and the rest of the Neutron team. From jaypipes at gmail.com Thu Jul 12 21:02:47 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Thu, 12 Jul 2018 17:02:47 -0400 Subject: [openstack-dev] [nova] What do we lose if the reshaper stuff doesn't land in Rocky? 
In-Reply-To: <7ed08ae5-40c3-5a6b-370e-1a06d974a7e5@gmail.com> References: <7ed08ae5-40c3-5a6b-370e-1a06d974a7e5@gmail.com> Message-ID: DB work is now pushed for the single transaction reshape() function: https://review.openstack.org/#/c/582383 Note that in working on that, I uncovered a bug in AllocationList.delete_all() which needed to first be fixed: https://bugs.launchpad.net/nova/+bug/1781430 A fix has been pushed here: https://review.openstack.org/#/c/582382/ Best, -jay On 07/12/2018 10:45 AM, Matt Riedemann wrote: > Continuing the discussion from the nova meeting today [1], I'm trying to > figure out what the risk / benefit / contingency is if we don't get the > reshaper stuff done in Rocky. > > In a nutshell, we need reshaper to migrate VGPU inventory for the > libvirt and xenapi drivers from the root compute node resource provider > to child providers in the compute node provider tree, because then we > can support multiple VGPU type inventory on the same compute host. [2] > > Looking at the status of the vgpu-rocky blueprint [3], the libvirt > changes are in merge conflict but the xenapi changes are ready to go. > > What I'm wondering is if we don't get reshaper done in Rocky, what does > that prevent us from doing in Stein? For example, does it mean we can't > support modeling NUMA in placement until the T release? Or does it just > mean that we lose the upgrade window from Rocky to Stein such that we > expect people to run the reshaper migration so that Stein code can > assume the migration has been done and model nested resource providers? > > If the former (no NUMA modeling until T), that's a big deal. If the > latter, it makes the Stein code more complicated but it doesn't sound > impossible, right? Wouldn't the Stein code just need to add some > checking to see if the migration has been done before it can support > some new features? > > Obviously if we don't have reshaper done in Rocky then the xenapi driver > can't support multiple VGPU types on the same compute host in Rocky - > but isn't that kind of the exact same situation if we don't get reshaper > done until Stein? > > [1] > http://eavesdrop.openstack.org/meetings/nova/2018/nova.2018-07-12-14.00.log.html#l-71 > > [2] > https://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/vgpu-rocky.html > > [3] > https://review.openstack.org/#/q/topic:bp/vgpu-rocky+(status:open+OR+status:merged) > > From openstack at fried.cc Thu Jul 12 21:29:01 2018 From: openstack at fried.cc (Eric Fried) Date: Thu, 12 Jul 2018 16:29:01 -0500 Subject: [openstack-dev] Fwd: [TIP] tox release 3.1.1 In-Reply-To: <1531169863-sup-281@lrrr.local> References: <1531152060-sup-5499@lrrr.local> <1eb422f7-b3f8-0715-44e5-3ed882a02be2@fried.cc> <1531169863-sup-281@lrrr.local> Message-ID: Here it is for nova. https://review.openstack.org/#/c/582392/ >> also don't love that immediately bumping the lower bound for tox is >> going to be kind of disruptive to a lot of people. By "kind of disruptive," do you mean: $ tox -e blah ERROR: MinVersionError: tox version is 1.6, required is at least 3.1.1 $ sudo pip install --upgrade tox $ tox -e blah ? Thanks, efried On 07/09/2018 03:58 PM, Doug Hellmann wrote: > Excerpts from Ben Nemec's message of 2018-07-09 15:42:02 -0500: >> >> On 07/09/2018 11:16 AM, Eric Fried wrote: >>> Doug- >>> >>> How long til we can start relying on the new behavior in the gate? I >>> gots me some basepython to purge... 
>> >> I want to point out that most projects require a rather old version of >> tox, so chances are most people are not staying up to date with the very >> latest version. I don't love the repetition in tox.ini right now, but I >> also don't love that immediately bumping the lower bound for tox is >> going to be kind of disruptive to a lot of people. >> >> 1: http://codesearch.openstack.org/?q=minversion&i=nope&files=tox.ini&repos= > > Good point. Any patches to clean up the repetition should probably > go ahead and update that minimum version setting, too. > > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From emilien at redhat.com Thu Jul 12 23:35:01 2018 From: emilien at redhat.com (Emilien Macchi) Date: Thu, 12 Jul 2018 19:35:01 -0400 Subject: [openstack-dev] [tripleo] Updates/upgrades equivalent for external_deploy_tasks In-Reply-To: <1420c4ac-b6f6-5606-d1c8-1bb05a941d2e@redhat.com> References: <1420c4ac-b6f6-5606-d1c8-1bb05a941d2e@redhat.com> Message-ID: On Tue, Jul 10, 2018 at 10:22 AM Jiří Stránský wrote: > Hi, > > with the move to config-download deployments, we'll be moving from > executing external installers (like ceph-ansible) via Heat resources > encapsulating Mistral workflows towards executing them via Ansible > directly (nested Ansible process via external_deploy_tasks). > > Updates and upgrades still need to be addressed here. I think we should > introduce external_update_tasks and external_upgrade_tasks for this > purpose, but i see two options how to construct the workflow with them. > > During update (mentioning just updates, but upgrades would be done > analogously) we could either: > > A) Run external_update_tasks, then external_deploy_tasks. > > This works with the assumption that updates are done very similarly to > deployment. The external_update_tasks could do some prep work and/or > export Ansible variables which then could affect what > external_deploy_tasks do (e.g. in case of ceph-ansible we'd probably > override the playbook path). This way we could also disable specific > parts of external_deploy_tasks on update, in case reuse is undesirable > in some places. > > B) Run only external_update_tasks. > > This would mean code for updates/upgrades of externally deployed > services would be completely separated from how their deployment is > done. If we wanted to reuse some of the deployment tasks, we'd have to > use the YAML anchor referencing mechanisms. (&anchor, *anchor) > > I think the options are comparable in terms of what is possible to > implement with them, the main difference is what use cases we want to > optimize for. > > Looking at what we currently have in external_deploy_tasks (e.g. > [1][2]), i think we'd have to do a lot of explicit reuse if we went with > B (inventory and variables generation, ...). So i'm leaning towards > option A (WIP patch at [3]) which should give us this reuse more > naturally. This approach would also be more in line with how we already > do normal updates and upgrades (also reusing deployment tasks). Please > let me know in case you have any concerns about such approach (looking > especially at Ceph and OpenShift integrators :) ). 
> +1 for Option A as well, I feel like it's the one which would give us the more of flexibility and also I'm not a big fan of the usage of Anchors for this use case. Some folks are currently working on extracting these tasks out of THT and I can already see something like: external_deploy_tasks - include_role: name: my-service tasks_from: deploy external_update_tasks - include_role: name: my-service tasks_from: update Or we could re-use the same playbooks, but use tags maybe. Anyway, I like your proposal and I vote for option A. > Thanks > > Jirka > > [1] > > https://github.com/openstack/tripleo-heat-templates/blob/8d7525fdf79f915e3f880ea0f3fd299234ecc635/docker/services/ceph-ansible/ceph-base.yaml#L340-L467 > [2] > > https://github.com/openstack/tripleo-heat-templates/blob/8d7525fdf79f915e3f880ea0f3fd299234ecc635/extraconfig/services/openshift-master.yaml#L70-L231 > [3] https://review.openstack.org/#/c/579170/ > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony at bakeyournoodle.com Thu Jul 12 23:54:09 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Fri, 13 Jul 2018 09:54:09 +1000 Subject: [openstack-dev] [requirements][infra] Maintaining constraints for several python versions In-Reply-To: <1531402672.2677182.1438491376.070BFAE6@webmail.messagingengine.com> References: <20180712043457.GA22285@thor.bakeyournoodle.com> <1531402672.2677182.1438491376.070BFAE6@webmail.messagingengine.com> Message-ID: <20180712235409.GD22285@thor.bakeyournoodle.com> On Thu, Jul 12, 2018 at 06:37:52AM -0700, Clark Boylan wrote: > On Wed, Jul 11, 2018, at 9:34 PM, Tony Breeds wrote: > > 1. Build pythons from source and use that to construct the venv > > [please no] > > Fungi mentions that 3.3 and 3.4 don't build easily on modern linux distros. However, 3.3 and 3.4 are also unsupported by Python at this point, maybe we can ignore them and focus on 3.5 and forward? We don't build new freeze lists for the stable branches, this is just a concern for master right? The focus is master, but it came up in the context of shoudl we just remove the python_version=='3.4', it turns out that at least one OS that will supported rock will be running with python 3.4 so while 3.4 is EOL I have to admit I'd quite like to be able to keep the 3.4 stuff around for rocky (and probably stein). It isn't a hard requirement. > > 2. Generate the constraints in an F28 image. My F28 has ample python > > versions: > > - /usr/bin/python2.6 > > - /usr/bin/python2.7 > > - /usr/bin/python3.3 > > - /usr/bin/python3.4 > > - /usr/bin/python3.5 > > - /usr/bin/python3.6 > > - /usr/bin/python3.7 > > I don't know how valid this still is but in the past fedora images > > have been seen as unstable and hard to keep current. If that isn't > > still the feeling then we could go down this path. Currently there a > > few minor problems with bindep.txt on fedora and generate-constraints > > doesn't work with py3 but these are pretty minor really. > > I think most of the problems with Fedora stability are around bringing up a new Fedora every 6 months or so. They tend to change sufficiently within that time period to make this a fairly involved exercise. 
But once working they work for the ~13 months of support they offer. I know Paul Belanger would like to iterate more quickly and just keep the most recent Fedora available (rather than ~2). Ok that's good context. It isn't that once the images are built they break it that they're hardish to build in the first place. I'd love to think that between Paul, Ian and I we'd be okay here but then again I don't really know what I'm saying ;P > > 3. Use docker images for python and generate the constraints with > > them. I've hacked up something we could use as a base for that in: > > https://review.openstack.org/581948 > > > > There are lots of open questions: > > - How do we make this nodepool/cloud provider friendly ? > > * Currently the containers just talk to the main debian mirrors. > > Do we have debian packages? If so we could just do sed magic. > > http://$MIRROR/debian (http://mirror.dfw.rax.openstack.org/debian for example) should be a working amd64 debian package mirror. \o/ > > - Do/Can we run a registry per provider? > > We do not, but we do have a caching dockerhub registry proxy in each region/provider. http://$MIRROR:8081/registry-1.docker if using older docker and http://$MIRROR:8082 for current docker. This was a compromise between caching the Internet and reliability. That'll do as long as it's easy to configure or transparent. > > - Can we generate and caches these images and only run pip install -U > > g-r to speed up the build > > Between cached upstream python docker images and prebuilt wheels mirrored in every cloud provider region I wonder if this will save a significant amount of time? May be worth starting without this and working from there if it remains slow. Yeah it may be that I'm over thinking it. For me (locally) it's really slow but perhaps with infrastructure you've mentioned it isn't worth it. Certainly something to look at later if it's a problem. > > - Are we okay with using docker this way? > > Should be fine, particularly if we are consuming the official Python images. Yup that's the plan. I've sent a PR to get some images we'd need built that aren't there today. > > > > > I like #2 the most but I wanted to seek wider feedback. > > I think each proposed option should work as long as we understand the limitations each presents. #2 should work fine if we have individuals interested and able to spin up new Fedora images and migrate jobs to that image after releases happen. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From tony at bakeyournoodle.com Thu Jul 12 23:55:50 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Fri, 13 Jul 2018 09:55:50 +1000 Subject: [openstack-dev] [requirements][infra] Maintaining constraints for several python versions In-Reply-To: <20180712135256.556k4flo56n3ufkq@yuggoth.org> References: <20180712043457.GA22285@thor.bakeyournoodle.com> <1531402672.2677182.1438491376.070BFAE6@webmail.messagingengine.com> <20180712135256.556k4flo56n3ufkq@yuggoth.org> Message-ID: <20180712235549.GE22285@thor.bakeyournoodle.com> On Thu, Jul 12, 2018 at 01:52:56PM +0000, Jeremy Stanley wrote: > On 2018-07-12 06:37:52 -0700 (-0700), Clark Boylan wrote: > [...] > > I think most of the problems with Fedora stability are around > > bringing up a new Fedora every 6 months or so. They tend to change > > sufficiently within that time period to make this a fairly > > involved exercise. 
But once working they work for the ~13 months > > of support they offer. I know Paul Belanger would like to iterate > > more quickly and just keep the most recent Fedora available > > (rather than ~2). > [...] > > Regardless its instability/churn makes it unsuitable for stable > branch jobs because the support lifetime of the distro release is > shorter than the maintenance lifetime of our stable branches. Would > probably be fine for master branch jobs but not beyond, right? Yup we only run the generate job on master, once we branch it's up to poeple to update/review the lists. So I'd hope that we'd have f28 and f29 overlap and roll forward as needed/able Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From tony at bakeyournoodle.com Thu Jul 12 23:58:21 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Fri, 13 Jul 2018 09:58:21 +1000 Subject: [openstack-dev] [requirements][infra] Maintaining constraints for several python versions In-Reply-To: <2975d14a-bfe7-5fe4-7b07-7419787bd17c@inaugust.com> References: <20180712043457.GA22285@thor.bakeyournoodle.com> <1531402672.2677182.1438491376.070BFAE6@webmail.messagingengine.com> <2975d14a-bfe7-5fe4-7b07-7419787bd17c@inaugust.com> Message-ID: <20180712235821.GF22285@thor.bakeyournoodle.com> On Thu, Jul 12, 2018 at 11:31:34AM -0500, Monty Taylor wrote: > FWIW, I use pyenv for python versions on my laptop and love it. I've > completely given up on distro-provided python for my own usage. Hmm okay I'll look at that and how it'd play with the generate job. It's quite possible I'm being short sighted but I'd really like to *not* have to build anything. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From tony at bakeyournoodle.com Fri Jul 13 00:01:11 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Fri, 13 Jul 2018 10:01:11 +1000 Subject: [openstack-dev] [requirements][infra] Maintaining constraints for several python versions In-Reply-To: <20180712160509.4caksal22ocw6klr@gentoo.org> References: <20180712043457.GA22285@thor.bakeyournoodle.com> <1531402672.2677182.1438491376.070BFAE6@webmail.messagingengine.com> <20180712135256.556k4flo56n3ufkq@yuggoth.org> <20180712160509.4caksal22ocw6klr@gentoo.org> Message-ID: <20180713000111.GG22285@thor.bakeyournoodle.com> On Thu, Jul 12, 2018 at 11:05:09AM -0500, Matthew Thode wrote: > I'm of the opinion that we should decouple from distro supported python > versions and rely on what versions upstream python supports (longer > lifetimes than our releases iirc). Using docker/pyenv does this decoupling but I'm not convinced that any option really means that we dont' end up running something that's EOL somewhere. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From tony at bakeyournoodle.com Fri Jul 13 00:07:44 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Fri, 13 Jul 2018 10:07:44 +1000 Subject: [openstack-dev] [requirements][infra] Maintaining constraints for several python versions In-Reply-To: <2975d14a-bfe7-5fe4-7b07-7419787bd17c@inaugust.com> References: <20180712043457.GA22285@thor.bakeyournoodle.com> <1531402672.2677182.1438491376.070BFAE6@webmail.messagingengine.com> <2975d14a-bfe7-5fe4-7b07-7419787bd17c@inaugust.com> Message-ID: <20180713000743.GH22285@thor.bakeyournoodle.com> On Thu, Jul 12, 2018 at 11:31:34AM -0500, Monty Taylor wrote: > there is also > > https://review.openstack.org/#/c/580730/ > > which adds a role to install docker and configure it to use the correct > registry. oooo shiny! That'll take care of all the docker setup nice! Can I create a job that Depends-On that one and see what happens when I try to build/run containers? /me suspects so but sometimes I like to check :) Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From tony at bakeyournoodle.com Fri Jul 13 00:12:04 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Fri, 13 Jul 2018 10:12:04 +1000 Subject: [openstack-dev] [tripleo] Rocky blueprints In-Reply-To: References: Message-ID: <20180713001204.GI22285@thor.bakeyournoodle.com> On Wed, Jul 11, 2018 at 10:39:30AM -0600, Alex Schultz wrote: > Currently open with pending patches (may need FFE): > - https://blueprints.launchpad.net/tripleo/+spec/multiarch-support I'd like an FFE for this, the open reviews are in pretty good shape and mostly merged. (or +W'd). We'll need another tripleo-common release after https://review.openstack.org/537768 merges which I'd really like to do next week if possible. There is some cleanup that can be done but nothing that's *needed* for rocky. After that there is still a validation that I need to write, and docs to update. I appreciate the help and support I've had from the TripleO community to get to this point. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From lars at redhat.com Fri Jul 13 02:17:25 2018 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Thu, 12 Jul 2018 22:17:25 -0400 Subject: [openstack-dev] [tripleo][pre] removing default ssh rule from tripleo::firewall::pre Message-ID: <20180713021725.6l23xllecv62m3d6@redhat.com> I've had a few operators complain about the permissive rule tripleo creates for ssh. The current alternatives seems to be to either disable tripleo firewall management completely, or move from the default-deny model to a set of rules that include higher-priority blacklist rules for ssh traffic. I've just submitted a pair of reviews [1] that (a) remove the default "allow ssh from everywhere" rule in tripleo::firewall:pre and (b) add a DefaultFirewallRules parameter to the tripleo-firewall service. The default value for this new parameter is the same rule that was previously in tripleo::firewall::pre, but now it can be replaced by an operator as part of the deployment configuration. 
For example, a deployment can include: parameter_defaults: DefaultFirewallRules: tripleo.tripleo_firewall.firewall_rules: '003 allow ssh from internal networks': source: '172.16.0.0/22' proto: 'tcp' dport: 22 '003 allow ssh from bastion host': source: '192.168.1.10' proto: 'tcp' dport: 22 [1] https://review.openstack.org/#/q/topic:feature/firewall%20(status:open%20OR%20status:merged) -- Lars Kellogg-Stedman | larsks @ {irc,twitter,github} http://blog.oddbit.com/ | From yamamoto at midokura.com Fri Jul 13 05:27:26 2018 From: yamamoto at midokura.com (Takashi Yamamoto) Date: Fri, 13 Jul 2018 14:27:26 +0900 Subject: [openstack-dev] [taas] LP project changes In-Reply-To: References: Message-ID: i went through existing bugs and prioritized them. i'd recommend the others to do the same. there are not too many of them. i also updated series and milestones. On Mon, Jul 2, 2018 at 7:02 PM, Takashi Yamamoto wrote: > hi, > > I created a LP team "tap-as-a-service-drivers", > whose initial members are same as the existing tap-as-a-service-core > group on gerrit. > I made the team the Maintainer and Driver of the tap-as-a-service project. > This way, someone in the team can take it over even if I disappeared > suddenly. :-) From bdobreli at redhat.com Fri Jul 13 07:54:17 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Fri, 13 Jul 2018 10:54:17 +0300 Subject: [openstack-dev] [kolla][nova][tripleo] Safe guest shutdowns with kolla? In-Reply-To: <153142577960.6991.11153929931053474192@ubuntu> References: <153142577960.6991.11153929931053474192@ubuntu> Message-ID: <835788b2-d843-672f-e07d-836ae5785f86@redhat.com> [Added tripleo] It would be nice to have this situation verified/improved for containerized libvirt for compute nodes deployed with TripleO as well. On 7/12/18 11:02 PM, Clint Byrum wrote: > Greetings! We've been deploying with Kolla on CentOS 7 now for a while, and > we've recently noticed a rather troubling behavior when we shutdown > hypervisors. > > Somewhere between systemd and libvirt's systemd-machined integration, > we see that guests get killed aggressively by SIGTERM'ing all of the > qemu-kvm processes. This seems to happen because they are scoped into > machine.slice, but systemd-machined is killed which drops those scopes > and thus results in killing off the machines. So far we had observed the similar [0] happening, but to systemd vs containers managed by docker-daemon (dockerd). [0] https://bugs.launchpad.net/tripleo/+bug/1778913 > > In the past, we've used the libvirt-guests service when our libvirt was > running outside of containers. This worked splendidly, as we could > have it wait 5 minutes for VMs to attempt a graceful shutdown, avoiding > interrupting any running processes. But this service isn't available on > the host OS, as it won't be able to talk to libvirt inside the container. > > The solution I've come up with for now is this: > > [Unit] > Description=Manage libvirt guests in kolla safely > After=docker.service systemd-machined.service > Requires=docker.service > > [Install] > WantedBy=sysinit.target > > [Service] > Type=oneshot > RemainAfterExit=yes > TimeoutStopSec=400 > ExecStart=/usr/bin/docker exec nova_libvirt /usr/libexec/libvirt-guests.sh start > ExecStart=/usr/bin/docker start nova_compute > ExecStop=/usr/bin/docker stop nova_compute > ExecStop=/usr/bin/docker exec nova_libvirt /usr/libexec/libvirt-guests.sh shutdown > > This doesn't seem to work, though I'm still trying to work out > the ordering and such. 
It should ensure that before we stop the > systemd-machined and destroy all of its scopes (thus, killing all the > vms), we run the libvirt-guests.sh script to try and shut them down. The > TimeoutStopSec=400 is because the script itself waits 300 seconds for any > VM that refuses to shutdown cleanly, so this gives it a chance to wait > for at least one of those. This is an imperfect solution but it allows us > to move forward after having made a reasonable attempt at clean shutdowns. > > Anyway, just wondering if anybody else using kolla-ansible or kolla > containers in general have run into this problem, and whether or not > there are better/known solutions. As I noted above, I think the issue may be valid for TripleO as well. > > Thanks! > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Best regards, Bogdan Dobrelya, Irc #bogdando From jistr at redhat.com Fri Jul 13 09:08:21 2018 From: jistr at redhat.com (=?UTF-8?B?SmnFmcOtIFN0csOhbnNrw70=?=) Date: Fri, 13 Jul 2018 11:08:21 +0200 Subject: [openstack-dev] [tripleo] Updates/upgrades equivalent for external_deploy_tasks In-Reply-To: References: <1420c4ac-b6f6-5606-d1c8-1bb05a941d2e@redhat.com> Message-ID: Thanks for the feedback, John and Emilien. On 13.7.2018 01:35, Emilien Macchi wrote: > +1 for Option A as well, I feel like it's the one which would give us the > more of flexibility and also I'm not a big fan of the usage of Anchors for > this use case. > Some folks are currently working on extracting these tasks out of THT and I > can already see something like: > > external_deploy_tasks > - include_role: > name: my-service > tasks_from: deploy > > external_update_tasks > - include_role: > name: my-service > tasks_from: update > > Or we could re-use the same playbooks, but use tags maybe. > Anyway, I like your proposal and I vote for option A. I like the tasks_from approach in the snippet. Regarding tags, i'm currently thinking of using them to optionally update/upgrade individual services which make use of external_*_tasks. E.g. in an environment with both OpenShift and Ceph, i'm hoping we could run: openstack overcloud external-update run --tags ceph openstack overcloud external-update run --tags openshift to update them separately if needed. That's the way i'm trying to prototype it right now anyway, open to feedback. 
Jirka From huangfuzeyi at gmail.com Fri Jul 13 11:10:12 2018 From: huangfuzeyi at gmail.com (Enoch Huangfu) Date: Fri, 13 Jul 2018 19:10:12 +0800 Subject: [openstack-dev] Need help on this neutron-server start error with vmware_nsx plugin enable Message-ID: env: openstack queen version on centos7 latest vmware_nsx plugin rpm installed: python-networking-vmware-nsx-12.0.1 when i modify 'core_plugin' value in [default] section of /etc/neutron/neutron.conf from ml2 to vmware_nsx.plugin.NsxDvsPlugin, then try to start neutron-server with command 'systemctl start neutron-server' on control node, the log shows: 2018-07-13 17:57:50.802 25653 INFO neutron.manager [-] Loading core plugin: vmware_nsx.plugin.NsxDvsPlugin 2018-07-13 17:57:51.017 25653 DEBUG neutron_lib.callbacks.manager [-] Subscribe: > rbac-policy before_create subscribe /usr/lib/python2.7/site-packages/neutron_lib/callbacks/manager.py:41 2018-07-13 17:57:51.017 25653 DEBUG neutron_lib.callbacks.manager [-] Subscribe: > rbac-policy before_update subscribe /usr/lib/python2.7/site-packages/neutron_lib/callbacks/manager.py:41 2018-07-13 17:57:51.017 25653 DEBUG neutron_lib.callbacks.manager [-] Subscribe: > rbac-policy before_delete subscribe /usr/lib/python2.7/site-packages/neutron_lib/callbacks/manager.py:41 2018-07-13 17:57:51.366 25653 DEBUG neutron_lib.callbacks.manager [-] Subscribe: router_gateway before_create subscribe /usr/lib/python2.7/site-packages/neutron_lib/callbacks/manager.py:41 2018-07-13 17:57:51.393 25653 DEBUG neutron_lib.callbacks.manager [-] Subscribe: > rbac-policy before_create subscribe /usr/lib/python2.7/site-packages/neutron_lib/callbacks/manager.py:41 2018-07-13 17:57:51.394 25653 DEBUG neutron_lib.callbacks.manager [-] Subscribe: > rbac-policy before_update subscribe /usr/lib/python2.7/site-packages/neutron_lib/callbacks/manager.py:41 2018-07-13 17:57:51.394 25653 DEBUG neutron_lib.callbacks.manager [-] Subscribe: > rbac-policy before_delete subscribe /usr/lib/python2.7/site-packages/neutron_lib/callbacks/manager.py:41 2018-07-13 17:57:51.442 25653 ERROR neutron_lib.utils.runtime [-] Error loading class by alias: NoMatches: No 'neutron.core_plugins' driver found, looking for 'vmware_nsx.plugin.NsxDvsPlugin' 2018-07-13 17:57:51.442 25653 ERROR neutron_lib.utils.runtime Traceback (most recent call last): 2018-07-13 17:57:51.442 25653 ERROR neutron_lib.utils.runtime File "/usr/lib/python2.7/site-packages/neutron_lib/utils/runtime.py", line 46, in load_class_by_alias_or_classname 2018-07-13 17:57:51.442 25653 ERROR neutron_lib.utils.runtime namespace, name, warn_on_missing_entrypoint=False) 2018-07-13 17:57:51.442 25653 ERROR neutron_lib.utils.runtime File "/usr/lib/python2.7/site-packages/stevedore/driver.py", line 61, in __init__ 2018-07-13 17:57:51.442 25653 ERROR neutron_lib.utils.runtime warn_on_missing_entrypoint=warn_on_missing_entrypoint 2018-07-13 17:57:51.442 25653 ERROR neutron_lib.utils.runtime File "/usr/lib/python2.7/site-packages/stevedore/named.py", line 89, in __init__ 2018-07-13 17:57:51.442 25653 ERROR neutron_lib.utils.runtime self._init_plugins(extensions) 2018-07-13 17:57:51.442 25653 ERROR neutron_lib.utils.runtime File "/usr/lib/python2.7/site-packages/stevedore/driver.py", line 113, in _init_plugins 2018-07-13 17:57:51.442 25653 ERROR neutron_lib.utils.runtime (self.namespace, name)) 2018-07-13 17:57:51.442 25653 ERROR neutron_lib.utils.runtime NoMatches: No 'neutron.core_plugins' driver found, looking for 'vmware_nsx.plugin.NsxDvsPlugin' 2018-07-13 17:57:51.442 25653 ERROR 
neutron_lib.utils.runtime 2018-07-13 17:57:51.443 25653 ERROR neutron_lib.utils.runtime [-] Error loading class by class name: ImportError: No module named neutron_fwaas.db.firewall 2018-07-13 17:57:51.443 25653 ERROR neutron_lib.utils.runtime Traceback (most recent call last): 2018-07-13 17:57:51.443 25653 ERROR neutron_lib.utils.runtime File "/usr/lib/python2.7/site-packages/neutron_lib/utils/runtime.py", line 52, in load_class_by_alias_or_classname 2018-07-13 17:57:51.443 25653 ERROR neutron_lib.utils.runtime class_to_load = importutils.import_class(name) 2018-07-13 17:57:51.443 25653 ERROR neutron_lib.utils.runtime File "/usr/lib/python2.7/site-packages/oslo_utils/importutils.py", line 30, in import_class 2018-07-13 17:57:51.443 25653 ERROR neutron_lib.utils.runtime __import__(mod_str) 2018-07-13 17:57:51.443 25653 ERROR neutron_lib.utils.runtime File "/usr/lib/python2.7/site-packages/vmware_nsx/plugin.py", line 24, in 2018-07-13 17:57:51.443 25653 ERROR neutron_lib.utils.runtime from vmware_nsx.plugins.nsx import plugin as nsx 2018-07-13 17:57:51.443 25653 ERROR neutron_lib.utils.runtime File "/usr/lib/python2.7/site-packages/vmware_nsx/plugins/nsx/plugin.py", line 64, in 2018-07-13 17:57:51.443 25653 ERROR neutron_lib.utils.runtime from vmware_nsx.plugins.nsx_v import plugin as v 2018-07-13 17:57:51.443 25653 ERROR neutron_lib.utils.runtime File "/usr/lib/python2.7/site-packages/vmware_nsx/plugins/nsx_v/plugin.py", line 145, in 2018-07-13 17:57:51.443 25653 ERROR neutron_lib.utils.runtime from vmware_nsx.services.fwaas.nsx_v import fwaas_callbacks 2018-07-13 17:57:51.443 25653 ERROR neutron_lib.utils.runtime File "/usr/lib/python2.7/site-packages/vmware_nsx/services/fwaas/nsx_v/fwa as_callbacks.py", line 19, in 2018-07-13 17:57:51.443 25653 ERROR neutron_lib.utils.runtime from vmware_nsx.services.fwaas.common import fwaas_callbacks_v1 as com_c lbcks 2018-07-13 17:57:51.443 25653 ERROR neutron_lib.utils.runtime File "/usr/lib/python2.7/site-packages/vmware_nsx/services/fwaas/common/fw aas_callbacks_v1.py", line 21, in 2018-07-13 17:57:51.443 25653 ERROR neutron_lib.utils.runtime from neutron_fwaas.db.firewall import firewall_db # noqa 2018-07-13 17:57:51.443 25653 ERROR neutron_lib.utils.runtime ImportError: No module named neutron_fwaas.db.firewall 2018-07-13 17:57:51.443 25653 ERROR neutron_lib.utils.runtime 2018-07-13 17:57:51.445 25653 ERROR neutron.manager [-] Plugin 'vmware_nsx.plugin.NsxDvsPlugin' not found. 2018-07-13 17:57:51.446 25653 DEBUG oslo_concurrency.lockutils [-] Lock "manager" released by "neutron.manager._create_instance" :: held 0 .644s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 2018-07-13 17:57:51.446 25653 ERROR neutron.service [-] Unrecoverable error: please check log for details.: ImportError: Class not found. 
2018-07-13 17:57:51.446 25653 ERROR neutron.service Traceback (most recent call last): 2018-07-13 17:57:51.446 25653 ERROR neutron.service File "/usr/lib/python2.7/site-packages/neutron/service.py", line 86, in serve_wsgi 2018-07-13 17:57:51.446 25653 ERROR neutron.service service.start() 2018-07-13 17:57:51.446 25653 ERROR neutron.service File "/usr/lib/python2.7/site-packages/neutron/service.py", line 62, in start 2018-07-13 17:57:51.446 25653 ERROR neutron.service self.wsgi_app = _run_wsgi(self.app_name) 2018-07-13 17:57:51.446 25653 ERROR neutron.service File "/usr/lib/python2.7/site-packages/neutron/service.py", line 289, in _run_wsgi 2018-07-13 17:57:51.446 25653 ERROR neutron.service app = config.load_paste_app(app_name) I have checked the configuration and plugin package with vmware openstack integration 5.0 build, seems that all things are the same, I have no idea now......... -------------- next part -------------- An HTML attachment was scrubbed... URL: From aschultz at redhat.com Fri Jul 13 13:47:17 2018 From: aschultz at redhat.com (Alex Schultz) Date: Fri, 13 Jul 2018 07:47:17 -0600 Subject: [openstack-dev] [tripleo][pre] removing default ssh rule from tripleo::firewall::pre In-Reply-To: <20180713021725.6l23xllecv62m3d6@redhat.com> References: <20180713021725.6l23xllecv62m3d6@redhat.com> Message-ID: On Thu, Jul 12, 2018 at 8:17 PM, Lars Kellogg-Stedman wrote: > I've had a few operators complain about the permissive rule tripleo > creates for ssh. The current alternatives seems to be to either disable > tripleo firewall management completely, or move from the default-deny > model to a set of rules that include higher-priority blacklist rules > for ssh traffic. > > I've just submitted a pair of reviews [1] that (a) remove the default > "allow ssh from everywhere" rule in tripleo::firewall:pre and (b) add > a DefaultFirewallRules parameter to the tripleo-firewall service. > > The default value for this new parameter is the same rule that was > previously in tripleo::firewall::pre, but now it can be replaced by an > operator as part of the deployment configuration. > > For example, a deployment can include: > > parameter_defaults: > DefaultFirewallRules: > tripleo.tripleo_firewall.firewall_rules: > '003 allow ssh from internal networks': > source: '172.16.0.0/22' > proto: 'tcp' > dport: 22 > '003 allow ssh from bastion host': > source: '192.168.1.10' > proto: 'tcp' > dport: 22 > I've commented on the reviews, but for the wider audience I don't think we should completely remove these default rules. As we've switched to ansible (and ssh) being the deployment orchestration mechanism, it is important that we don't allow a user to lock themselves out of their cloud via a bad ssh rule. I think we should update the default rule to allow access over the control plane but there must be at least 1 rule that we're enforcing exist so the deployment and update processes will continue to function. 
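To make that concrete, here is a rough sketch of the kind of guard I have in mind (illustrative Python only, not actual TripleO validation code; the rule layout just mirrors the DefaultFirewallRules example from Lars' first message):

    # Illustrative sketch: refuse a DefaultFirewallRules override that would
    # drop every ssh rule. The dict layout mirrors the example earlier in
    # the thread; nothing here is real TripleO code.
    def has_ssh_rule(firewall_rules):
        """Return True if at least one rule still allows tcp/22."""
        return any(rule.get('proto') == 'tcp' and str(rule.get('dport')) == '22'
                   for rule in firewall_rules.values())

    # An override that keeps ssh reachable from the ctlplane network
    # (192.168.24.0/24 is only the usual default, adjust to your deployment).
    override = {
        '003 allow ssh from the ctlplane': {
            'source': '192.168.24.0/24',
            'proto': 'tcp',
            'dport': 22,
        },
    }

    assert has_ssh_rule(override)
    assert not has_ssh_rule({})  # an empty override should be rejected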
Thanks, -Alex > [1] https://review.openstack.org/#/q/topic:feature/firewall%20(status:open%20OR%20status:merged) > > -- > Lars Kellogg-Stedman | larsks @ {irc,twitter,github} > http://blog.oddbit.com/ | > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From aschultz at redhat.com Fri Jul 13 13:50:01 2018 From: aschultz at redhat.com (Alex Schultz) Date: Fri, 13 Jul 2018 07:50:01 -0600 Subject: [openstack-dev] [kolla][nova][tripleo] Safe guest shutdowns with kolla? In-Reply-To: <835788b2-d843-672f-e07d-836ae5785f86@redhat.com> References: <153142577960.6991.11153929931053474192@ubuntu> <835788b2-d843-672f-e07d-836ae5785f86@redhat.com> Message-ID: On Fri, Jul 13, 2018 at 1:54 AM, Bogdan Dobrelya wrote: > [Added tripleo] > > It would be nice to have this situation verified/improved for containerized > libvirt for compute nodes deployed with TripleO as well. > > On 7/12/18 11:02 PM, Clint Byrum wrote: >> >> Greetings! We've been deploying with Kolla on CentOS 7 now for a while, >> and >> we've recently noticed a rather troubling behavior when we shutdown >> hypervisors. >> >> Somewhere between systemd and libvirt's systemd-machined integration, >> we see that guests get killed aggressively by SIGTERM'ing all of the >> qemu-kvm processes. This seems to happen because they are scoped into >> machine.slice, but systemd-machined is killed which drops those scopes >> and thus results in killing off the machines. > > > So far we had observed the similar [0] happening, but to systemd vs > containers managed by docker-daemon (dockerd). > > [0] https://bugs.launchpad.net/tripleo/+bug/1778913 > > >> >> In the past, we've used the libvirt-guests service when our libvirt was >> running outside of containers. This worked splendidly, as we could >> have it wait 5 minutes for VMs to attempt a graceful shutdown, avoiding >> interrupting any running processes. But this service isn't available on >> the host OS, as it won't be able to talk to libvirt inside the container. >> >> The solution I've come up with for now is this: >> >> [Unit] >> Description=Manage libvirt guests in kolla safely >> After=docker.service systemd-machined.service >> Requires=docker.service >> >> [Install] >> WantedBy=sysinit.target >> >> [Service] >> Type=oneshot >> RemainAfterExit=yes >> TimeoutStopSec=400 >> ExecStart=/usr/bin/docker exec nova_libvirt /usr/libexec/libvirt-guests.sh >> start >> ExecStart=/usr/bin/docker start nova_compute >> ExecStop=/usr/bin/docker stop nova_compute >> ExecStop=/usr/bin/docker exec nova_libvirt /usr/libexec/libvirt-guests.sh >> shutdown >> >> This doesn't seem to work, though I'm still trying to work out >> the ordering and such. It should ensure that before we stop the >> systemd-machined and destroy all of its scopes (thus, killing all the >> vms), we run the libvirt-guests.sh script to try and shut them down. The >> TimeoutStopSec=400 is because the script itself waits 300 seconds for any >> VM that refuses to shutdown cleanly, so this gives it a chance to wait >> for at least one of those. This is an imperfect solution but it allows us >> to move forward after having made a reasonable attempt at clean shutdowns. 
>> >> Anyway, just wondering if anybody else using kolla-ansible or kolla >> containers in general have run into this problem, and whether or not >> there are better/known solutions. > > > As I noted above, I think the issue may be valid for TripleO as well. > I think https://review.openstack.org/#/c/580351/ is trying to address this. Thanks, -Alex >> >> Thanks! >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > -- > Best regards, > Bogdan Dobrelya, > Irc #bogdando > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From lars at redhat.com Fri Jul 13 14:22:47 2018 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Fri, 13 Jul 2018 10:22:47 -0400 Subject: [openstack-dev] [tripleo][pre] removing default ssh rule from tripleo::firewall::pre In-Reply-To: References: <20180713021725.6l23xllecv62m3d6@redhat.com> Message-ID: <20180713142247.kigviaieauzikkf3@redhat.com> On Fri, Jul 13, 2018 at 07:47:17AM -0600, Alex Schultz wrote: > I think we should update the default rule to allow access over the > control plane but there must be at least 1 rule that we're enforcing > exist so the deployment and update processes will continue to > function. That's makes sense. I'll update the review with that change. -- Lars Kellogg-Stedman | larsks @ {irc,twitter,github} http://blog.oddbit.com/ | From emilien at redhat.com Fri Jul 13 14:42:52 2018 From: emilien at redhat.com (Emilien Macchi) Date: Fri, 13 Jul 2018 10:42:52 -0400 Subject: [openstack-dev] [tripleo] Rocky blueprints In-Reply-To: References: Message-ID: On Thu, Jul 12, 2018 at 11:07 AM Bogdan Dobrelya wrote: [...] > > - > https://blueprints.launchpad.net/tripleo/+spec/containerized-undercloud > > This needs FFE please. [...] No i don't think we need FFE for containerized undercloud. Most of the code has merged and we're switching the default in tripleoclient as of today if the patches merge (in gate today probably). So we're good on this one. -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Fri Jul 13 15:12:37 2018 From: openstack at nemebean.com (Ben Nemec) Date: Fri, 13 Jul 2018 10:12:37 -0500 Subject: [openstack-dev] Fwd: [TIP] tox release 3.1.1 In-Reply-To: References: <1531152060-sup-5499@lrrr.local> <1eb422f7-b3f8-0715-44e5-3ed882a02be2@fried.cc> <1531169863-sup-281@lrrr.local> Message-ID: On 07/12/2018 04:29 PM, Eric Fried wrote: > Here it is for nova. > > https://review.openstack.org/#/c/582392/ > >>> also don't love that immediately bumping the lower bound for tox is >>> going to be kind of disruptive to a lot of people. > > By "kind of disruptive," do you mean: > > $ tox -e blah > ERROR: MinVersionError: tox version is 1.6, required is at least 3.1.1 > $ sudo pip install --upgrade tox > > $ tox -e blah > Repeat for every developer on every project that gets updated. And if you installed tox from a distro package then it might not be that simple since pip installing over distro packages can get weird. 
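For anyone wondering whether their environment is even affected, here is a quick way to compare the installed tox against a project's declared minversion (an illustrative pkg_resources-based sketch only, not something any project ships):

    # Illustrative sketch: does the locally installed tox satisfy the
    # minversion declared in a project's tox.ini?
    try:
        import configparser                      # Python 3
    except ImportError:
        import ConfigParser as configparser      # Python 2

    import pkg_resources

    def tox_is_new_enough(tox_ini='tox.ini'):
        # RawConfigParser avoids interpolation problems with tox.ini contents
        parser = configparser.RawConfigParser()
        parser.read(tox_ini)
        wanted = parser.get('tox', 'minversion')
        have = pkg_resources.get_distribution('tox').version
        return (pkg_resources.parse_version(have) >=
                pkg_resources.parse_version(wanted))

    if __name__ == '__main__':
        print(tox_is_new_enough())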
No, it's not a huge deal, but then neither is the repetition in tox.ini so I'd just as soon leave it be for now. But I'm not going to -1 any patches either. > > ? > > Thanks, > efried > > On 07/09/2018 03:58 PM, Doug Hellmann wrote: >> Excerpts from Ben Nemec's message of 2018-07-09 15:42:02 -0500: >>> >>> On 07/09/2018 11:16 AM, Eric Fried wrote: >>>> Doug- >>>> >>>> How long til we can start relying on the new behavior in the gate? I >>>> gots me some basepython to purge... >>> >>> I want to point out that most projects require a rather old version of >>> tox, so chances are most people are not staying up to date with the very >>> latest version. I don't love the repetition in tox.ini right now, but I >>> also don't love that immediately bumping the lower bound for tox is >>> going to be kind of disruptive to a lot of people. >>> >>> 1: http://codesearch.openstack.org/?q=minversion&i=nope&files=tox.ini&repos= >> >> Good point. Any patches to clean up the repetition should probably >> go ahead and update that minimum version setting, too. >> >> Doug >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From johnsomor at gmail.com Fri Jul 13 15:17:45 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Fri, 13 Jul 2018 08:17:45 -0700 Subject: [openstack-dev] [octavia][ptg] Stein PTG Planning etherpad for Octavia Message-ID: Hi Octavia folks! I have created an etherpad [1] for topics at the Stein PTG in Denver. Please indicate if you will be attending or not and any topics you think we should cover. Michael [1] https://etherpad.openstack.org/p/octavia-stein-ptg From lexuns at gmail.com Fri Jul 13 16:04:59 2018 From: lexuns at gmail.com (Tong Liu) Date: Fri, 13 Jul 2018 09:04:59 -0700 Subject: [openstack-dev] Need help on this neutron-server start error with vmware_nsx plugin enable In-Reply-To: References: Message-ID: Hi Enoch, There are two issues here. 1. Plugin 'vmware_nsx.plugin.NsxDvsPlugin' cannot be found. This could be resolved by changing core_plugin to 'vmware_nsxv' as the entry point for vmware_nsxv is defined as vmware_nsxv. 2. No module named neutron_fwaas.db.firewall It looks like you are missing firewall module. Can you try to install neutron_fwaas module either from rpm or from repo? 
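If it helps, you can also double-check which core plugin aliases are actually registered on your controller before editing neutron.conf. A plain Python sketch (it just reads the same 'neutron.core_plugins' entry point namespace that appears in your traceback; it is not part of any official tooling):

    # List every core plugin alias stevedore can resolve on this host.
    # core_plugin should be set to one of these names (or to a class path
    # that really imports).
    import pkg_resources

    for ep in pkg_resources.iter_entry_points('neutron.core_plugins'):
        print(ep)   # one "alias = module:ClassName" line per registered plugin

If the alias you want to use does not show up in that list, the plugin package itself is not installed or registered correctly.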
Thanks, Tong On Fri, Jul 13, 2018 at 4:10 AM Enoch Huangfu wrote: > env: > openstack queen version on centos7 > latest vmware_nsx plugin rpm installed: python-networking-vmware-nsx-12.0.1 > > when i modify 'core_plugin' value in [default] section of > /etc/neutron/neutron.conf from ml2 to vmware_nsx.plugin.NsxDvsPlugin, then > try to start neutron-server with command 'systemctl start neutron-server' > on control node, the log shows: > > 2018-07-13 17:57:50.802 25653 INFO neutron.manager [-] Loading core > plugin: vmware_nsx.plugin.NsxDvsPlugin > 2018-07-13 17:57:51.017 25653 DEBUG neutron_lib.callbacks.manager [-] > Subscribe: > rbac-policy before_create > subscribe > /usr/lib/python2.7/site-packages/neutron_lib/callbacks/manager.py:41 > 2018-07-13 17:57:51.017 25653 DEBUG neutron_lib.callbacks.manager [-] > Subscribe: > rbac-policy before_update > subscribe > /usr/lib/python2.7/site-packages/neutron_lib/callbacks/manager.py:41 > 2018-07-13 17:57:51.017 25653 DEBUG neutron_lib.callbacks.manager [-] > Subscribe: > rbac-policy before_delete > subscribe > /usr/lib/python2.7/site-packages/neutron_lib/callbacks/manager.py:41 > 2018-07-13 17:57:51.366 25653 DEBUG neutron_lib.callbacks.manager [-] > Subscribe: > router_gateway before_create subscribe > /usr/lib/python2.7/site-packages/neutron_lib/callbacks/manager.py:41 > 2018-07-13 17:57:51.393 25653 DEBUG neutron_lib.callbacks.manager [-] > Subscribe: > rbac-policy before_create > subscribe > /usr/lib/python2.7/site-packages/neutron_lib/callbacks/manager.py:41 > 2018-07-13 17:57:51.394 25653 DEBUG neutron_lib.callbacks.manager [-] > Subscribe: > rbac-policy before_update > subscribe > /usr/lib/python2.7/site-packages/neutron_lib/callbacks/manager.py:41 > 2018-07-13 17:57:51.394 25653 DEBUG neutron_lib.callbacks.manager [-] > Subscribe: > rbac-policy before_delete > subscribe > /usr/lib/python2.7/site-packages/neutron_lib/callbacks/manager.py:41 > 2018-07-13 17:57:51.442 25653 ERROR neutron_lib.utils.runtime [-] Error > loading class by alias: NoMatches: No 'neutron.core_plugins' driver found, > looking for 'vmware_nsx.plugin.NsxDvsPlugin' > 2018-07-13 17:57:51.442 25653 ERROR neutron_lib.utils.runtime Traceback > (most recent call last): > 2018-07-13 17:57:51.442 25653 ERROR neutron_lib.utils.runtime File > "/usr/lib/python2.7/site-packages/neutron_lib/utils/runtime.py", line 46, > in load_class_by_alias_or_classname > 2018-07-13 17:57:51.442 25653 ERROR neutron_lib.utils.runtime > namespace, name, warn_on_missing_entrypoint=False) > 2018-07-13 17:57:51.442 25653 ERROR neutron_lib.utils.runtime File > "/usr/lib/python2.7/site-packages/stevedore/driver.py", line 61, in __init__ > 2018-07-13 17:57:51.442 25653 ERROR neutron_lib.utils.runtime > warn_on_missing_entrypoint=warn_on_missing_entrypoint > 2018-07-13 17:57:51.442 25653 ERROR neutron_lib.utils.runtime File > "/usr/lib/python2.7/site-packages/stevedore/named.py", line 89, in __init__ > 2018-07-13 17:57:51.442 25653 ERROR neutron_lib.utils.runtime > self._init_plugins(extensions) > 2018-07-13 17:57:51.442 25653 ERROR neutron_lib.utils.runtime File > "/usr/lib/python2.7/site-packages/stevedore/driver.py", line 113, in > _init_plugins > 2018-07-13 17:57:51.442 25653 ERROR neutron_lib.utils.runtime > (self.namespace, name)) > 2018-07-13 17:57:51.442 25653 ERROR neutron_lib.utils.runtime NoMatches: > No 'neutron.core_plugins' driver found, looking for > 'vmware_nsx.plugin.NsxDvsPlugin' > 2018-07-13 17:57:51.442 25653 ERROR neutron_lib.utils.runtime > 2018-07-13 17:57:51.443 25653 ERROR 
neutron_lib.utils.runtime [-] Error > loading class by class name: ImportError: No module named > neutron_fwaas.db.firewall > 2018-07-13 17:57:51.443 25653 ERROR neutron_lib.utils.runtime Traceback > (most recent call last): > 2018-07-13 17:57:51.443 25653 ERROR neutron_lib.utils.runtime File > "/usr/lib/python2.7/site-packages/neutron_lib/utils/runtime.py", line 52, > in load_class_by_alias_or_classname > 2018-07-13 17:57:51.443 25653 ERROR neutron_lib.utils.runtime > class_to_load = importutils.import_class(name) > 2018-07-13 17:57:51.443 25653 ERROR neutron_lib.utils.runtime File > "/usr/lib/python2.7/site-packages/oslo_utils/importutils.py", line 30, in > import_class > 2018-07-13 17:57:51.443 25653 ERROR neutron_lib.utils.runtime > __import__(mod_str) > 2018-07-13 17:57:51.443 25653 ERROR neutron_lib.utils.runtime File > "/usr/lib/python2.7/site-packages/vmware_nsx/plugin.py", line 24, in > > 2018-07-13 17:57:51.443 25653 ERROR neutron_lib.utils.runtime from > vmware_nsx.plugins.nsx import plugin as nsx > 2018-07-13 17:57:51.443 25653 ERROR neutron_lib.utils.runtime File > "/usr/lib/python2.7/site-packages/vmware_nsx/plugins/nsx/plugin.py", line > 64, in > 2018-07-13 17:57:51.443 25653 ERROR neutron_lib.utils.runtime from > vmware_nsx.plugins.nsx_v import plugin as v > 2018-07-13 17:57:51.443 25653 ERROR neutron_lib.utils.runtime File > "/usr/lib/python2.7/site-packages/vmware_nsx/plugins/nsx_v/plugin.py", line > 145, in > 2018-07-13 17:57:51.443 25653 ERROR neutron_lib.utils.runtime from > vmware_nsx.services.fwaas.nsx_v import fwaas_callbacks > 2018-07-13 17:57:51.443 25653 ERROR neutron_lib.utils.runtime File > "/usr/lib/python2.7/site-packages/vmware_nsx/services/fwaas/nsx_v/fwa > as_callbacks.py", line 19, in > 2018-07-13 17:57:51.443 25653 ERROR neutron_lib.utils.runtime from > vmware_nsx.services.fwaas.common import fwaas_callbacks_v1 as com_c > lbcks > 2018-07-13 17:57:51.443 25653 ERROR neutron_lib.utils.runtime File > "/usr/lib/python2.7/site-packages/vmware_nsx/services/fwaas/common/fw > aas_callbacks_v1.py", line 21, in > 2018-07-13 17:57:51.443 25653 ERROR neutron_lib.utils.runtime from > neutron_fwaas.db.firewall import firewall_db # noqa > 2018-07-13 17:57:51.443 25653 ERROR neutron_lib.utils.runtime ImportError: > No module named neutron_fwaas.db.firewall > 2018-07-13 17:57:51.443 25653 ERROR neutron_lib.utils.runtime > 2018-07-13 17:57:51.445 25653 ERROR neutron.manager [-] Plugin > 'vmware_nsx.plugin.NsxDvsPlugin' not found. > 2018-07-13 17:57:51.446 25653 DEBUG oslo_concurrency.lockutils [-] Lock > "manager" released by "neutron.manager._create_instance" :: held 0 > .644s inner > /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 > 2018-07-13 17:57:51.446 25653 ERROR neutron.service [-] Unrecoverable > error: please check log for details.: ImportError: Class not found. 
> 2018-07-13 17:57:51.446 25653 ERROR neutron.service Traceback (most recent > call last): > 2018-07-13 17:57:51.446 25653 ERROR neutron.service File > "/usr/lib/python2.7/site-packages/neutron/service.py", line 86, in > serve_wsgi > 2018-07-13 17:57:51.446 25653 ERROR neutron.service service.start() > 2018-07-13 17:57:51.446 25653 ERROR neutron.service File > "/usr/lib/python2.7/site-packages/neutron/service.py", line 62, in start > 2018-07-13 17:57:51.446 25653 ERROR neutron.service self.wsgi_app = > _run_wsgi(self.app_name) > 2018-07-13 17:57:51.446 25653 ERROR neutron.service File > "/usr/lib/python2.7/site-packages/neutron/service.py", line 289, in > _run_wsgi > 2018-07-13 17:57:51.446 25653 ERROR neutron.service app = > config.load_paste_app(app_name) > > > > > I have checked the configuration and plugin package with vmware openstack > integration 5.0 build, seems that all things are the same, I have no idea > now......... > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at fried.cc Fri Jul 13 16:18:13 2018 From: openstack at fried.cc (Eric Fried) Date: Fri, 13 Jul 2018 11:18:13 -0500 Subject: [openstack-dev] Fwd: [TIP] tox release 3.1.1 In-Reply-To: References: <1531152060-sup-5499@lrrr.local> <1eb422f7-b3f8-0715-44e5-3ed882a02be2@fried.cc> <1531169863-sup-281@lrrr.local> Message-ID: <4c30fc5c-3331-22ba-309d-8f416852ec95@fried.cc> Ben- On 07/13/2018 10:12 AM, Ben Nemec wrote: > > > On 07/12/2018 04:29 PM, Eric Fried wrote: >> Here it is for nova. >> >> https://review.openstack.org/#/c/582392/ >> >>>> also don't love that immediately bumping the lower bound for tox is >>>> going to be kind of disruptive to a lot of people. >> >> By "kind of disruptive," do you mean: >> >>   $ tox -e blah >>   ERROR: MinVersionError: tox version is 1.6, required is at least 3.1.1 >>   $ sudo pip install --upgrade tox >>   >>   $ tox -e blah >>   > > Repeat for every developer on every project that gets updated.  And if > you installed tox from a distro package then it might not be that > simple since pip installing over distro packages can get weird. Not every project; I only install tox once on my system and it works for all projects, nah? Am I missing something? Stephen commented similarly that we should wait for distros to pick up the package. WFM, nothing urgent about this. > > No, it's not a huge deal, but then neither is the repetition in > tox.ini so I'd just as soon leave it be for now.  But I'm not going to > -1 any patches either. > >> >> ? >> >> Thanks, >> efried >> >> On 07/09/2018 03:58 PM, Doug Hellmann wrote: >>> Excerpts from Ben Nemec's message of 2018-07-09 15:42:02 -0500: >>>> >>>> On 07/09/2018 11:16 AM, Eric Fried wrote: >>>>> Doug- >>>>> >>>>>      How long til we can start relying on the new behavior in the >>>>> gate?  I >>>>> gots me some basepython to purge... >>>> >>>> I want to point out that most projects require a rather old version of >>>> tox, so chances are most people are not staying up to date with the >>>> very >>>> latest version.  I don't love the repetition in tox.ini right now, >>>> but I >>>> also don't love that immediately bumping the lower bound for tox is >>>> going to be kind of disruptive to a lot of people. 
>>>> >>>> 1: >>>> http://codesearch.openstack.org/?q=minversion&i=nope&files=tox.ini&repos= >>> >>> Good point. Any patches to clean up the repetition should probably >>> go ahead and update that minimum version setting, too. >>> >>> Doug >>> >>> __________________________________________________________________________ >>> >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > From colleen at gazlene.net Fri Jul 13 18:33:18 2018 From: colleen at gazlene.net (Colleen Murphy) Date: Fri, 13 Jul 2018 20:33:18 +0200 Subject: [openstack-dev] [keystone] Keystone Team Update - Week of 9 July 2018 Message-ID: <1531506798.2175801.1440016944.59C6ACD5@webmail.messagingengine.com> # Keystone Team Update - Week of 9 July 2018 ## News ### New Core Reviewer We added a new core reviewer[1]: thanks to XiYuan for stepping up to take this responsibility and for all your hard work on keystone! [1] http://lists.openstack.org/pipermail/openstack-dev/2018-July/132123.html ### Release Status This week is our scheduled feature freeze week, but we did not have quite the tumult of activity we had at feature freeze last cycle. We're pushing the auth receipts work until after the token model refactor is finished[2], to avoid the receipts model having to carry extra technical debt. The fine-grained access control feature for application credentials is also going to need to be pushed to next cycle when more of us can dedicate time to helping with it it[3]. The base work for default roles was completed[4] but the auditing of the keystone API hasn't been completed yet and is partly dependent on the flask work, so it is going to continue on into next cycle[5]. The hierarchical limits work is pretty solid but we're (likely) going to let it slide into next week so that some of the interface details can be worked out[6]. [2] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-07-10.log.html#t2018-07-10T01:39:27 [3] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-07-13.log.html#t2018-07-13T14:19:08 [4] https://review.openstack.org/572243 [5] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-07-13.log.html#t2018-07-13T14:02:03 [6] https://review.openstack.org/557696 ### PTG Planning We're starting to prepare topics for the next PTG in Denver[7] so please add topics to the planning etherpad[8]. [7] http://lists.openstack.org/pipermail/openstack-dev/2018-July/132144.html [8] https://etherpad.openstack.org/p/keystone-stein-ptg ## Recently Merged Changes Search query: https://bit.ly/2IACk3F We merged 20 changes this week, including several of the flask conversion patches. ## Changes that need Attention Search query: https://bit.ly/2wv7QLK There are 62 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots. The major efforts to focus on are the token model refactor[9], the flaskification work[10], and the hierarchical project limits work[11]. 
[9] https://review.openstack.org/#/q/is:open+topic:bug/1778945 [10] https://review.openstack.org/#/q/is:open+topic:bug/1776504 [11] https://review.openstack.org/#/q/is:open+topic:bp/strict-two-level-model ## Bugs This week we opened 3 new bugs and closed 4. Bugs opened (3) Bug #1780532 (keystone:Undecided) opened by zheng yan https://bugs.launchpad.net/keystone/+bug/1780532 Bug #1780896 (keystone:Undecided) opened by wangxiyuan https://bugs.launchpad.net/keystone/+bug/1780896 Bug #1781536 (keystone:Undecided) opened by Pawan Gupta https://bugs.launchpad.net/keystone/+bug/1781536 Bugs closed (0) Bugs fixed (4) Bug #1765193 (keystone:Medium) fixed by wangxiyuan https://bugs.launchpad.net/keystone/+bug/1765193 Bug #1780159 (keystone:Medium) fixed by Sami Makki https://bugs.launchpad.net/keystone/+bug/1780159 Bug #1780896 (keystone:Undecided) fixed by wangxiyuan https://bugs.launchpad.net/keystone/+bug/1780896 Bug #1779172 (oslo.policy:Undecided) fixed by Lance Bragstad https://bugs.launchpad.net/oslo.policy/+bug/1779172 ## Milestone Outlook https://releases.openstack.org/rocky/schedule.html This week is our scheduled feature freeze. We are likely going to make an extension for the hierarchical project limits work, pending discussion on the mailing list. Next week is the non-client final release date[12], so work happening in keystoneauth, keystonemiddleware, and our oslo libraries needs to be finished and reviewed prior to next Thursday so a release can be requested in time. [12] https://review.openstack.org/572243 ## Help with this newsletter Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter Dashboard generated using gerrit-dash-creator and https://gist.github.com/lbragstad/9b0477289177743d1ebfc276d1697b67 From emilien at redhat.com Fri Jul 13 18:33:02 2018 From: emilien at redhat.com (Emilien Macchi) Date: Fri, 13 Jul 2018 14:33:02 -0400 Subject: [openstack-dev] [tripleo] Stein blueprint - Plan to remove Keepalived support (replaced by Pacemaker) Message-ID: Greetings, We have been supporting both Keepalived and Pacemaker to handle VIP management. Keepalived is actually the tool used by the undercloud when SSL is enabled (for SSL termination). While Pacemaker is used on the overcloud to handle VIPs but also services HA. I see some benefits at removing support for keepalived and deploying Pacemaker by default: - pacemaker can be deployed on one node (we actually do it in CI), so can be deployed on the undercloud to handle VIPs and manage HA as well. - it'll allow to extend undercloud & standalone use cases to support multinode one day, with HA and SSL, like we already have on the overcloud. - it removes the complexity of managing two tools so we'll potentially removing code in TripleO. - of course since pacemaker features from overcloud would be usable in standalone environment, but also on the undercloud. There is probably some downside, the first one is I think Keepalived is much more lightweight than Pacemaker, we probably need to run some benchmark here and make sure we don't make the undercloud heavier than it is now. I went ahead and created this blueprint for Stein: https://blueprints.launchpad.net/tripleo/+spec/undercloud-pacemaker-default I also plan to prototype some basic code soon and provide an upgrade path if we accept this blueprint. This is something I would like to discuss here and at the PTG, feel free to bring questions/concerns, Thanks! 
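On the benchmark question, even something very rough would give us a first data point. A quick sketch (it assumes psutil is available on the node, and the process name fragments are only a guess at what to match; nothing TripleO-specific):

    # Rough footprint comparison: total resident memory of the processes
    # belonging to each VIP manager on a node. psutil is assumed available.
    import psutil

    # Process-name fragments to match; adjust as needed for your setup.
    GROUPS = ('keepalived', 'pacemaker', 'corosync')

    def rss_per_group(groups=GROUPS):
        """Sum RSS (bytes) of processes whose name contains each fragment."""
        totals = dict.fromkeys(groups, 0)
        for proc in psutil.process_iter(['name', 'memory_info']):
            name = (proc.info['name'] or '').lower()
            mem = proc.info['memory_info']
            if mem is None:
                continue
            for group in groups:
                if group in name:
                    totals[group] += mem.rss
        return totals

    if __name__ == '__main__':
        for group, rss in sorted(rss_per_group().items()):
            print('%s: %.1f MiB' % (group, rss / 1048576.0))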
-- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Fri Jul 13 18:57:58 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 13 Jul 2018 14:57:58 -0400 Subject: [openstack-dev] [all][ptl][release] deadline for non-client library releases 19 July Message-ID: <1531507218-sup-7131@lrrr.local> The deadline for releasing non-client libraries for Rocky is coming up next week on 19 July (https://releases.openstack.org/rocky/schedule.html). We have a few libraries that have no releases at all this cycle, which makes creating the stable branch problematic. Therefore, if the PTL or release liaison do not propose a release by next week, the release team may tag HEAD of master with an appropriate version number (raising at least the minor value) to provide a place to create the stable/rocky branch. (See Thread at http://lists.openstack.org/pipermail/openstack-dev/2018-June/131341.html for the discussion of this policy. We will need to decide whether to tag based on whether there are any changes on master for the repository.) - automaton - ceilometermiddleware - debtcollector - pycadf - requestsexceptions - stevedore Below is the set of unreleased changes for known deliverables classified as non-client libraries and that look like they may warrant a release (i.e., they are not only CI configuration changes). Any items in the report below that are not also in the list above will not be tagged by the release team because we already have a tag we can use to create the stable branch. The unrelease changes for those deliverables will not be available to Rocky users unless they are backported to the stable branch. If you manage one of these deliverables, please review the list and consider proposing a release before the deadline. 
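To make the version arithmetic explicit: "raising at least the minor value" means, for example, that a library whose last tag was 1.14.0 would get a 1.15.0 tag at the branch point. A trivial illustration of that rule (not a release-team tool, just the arithmetic):

    def next_minor(version):
        """Return the next minor version for an X.Y.Z version string."""
        major, minor, _patch = (int(part) for part in version.split('.'))
        return '{}.{}.0'.format(major, minor + 1)

    # e.g. automaton, whose last tag appears in the list below
    assert next_minor('1.14.0') == '1.15.0'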
Doug [ Unreleased changes in openstack/automaton (master) ] Changes between 1.14.0 and ce03d76 * ce03d76 2018-07-11 09:48:53 +0700 Switch to stestr * 5d8233a 2018-06-06 14:53:48 -0400 fix tox python3 overrides * 885b3d4 2018-04-21 02:40:23 +0800 Trivial: Update pypi url to new url * 67b3093 2018-04-13 15:48:27 -0400 fix list of default virtualenvs * aa12fe5 2018-04-13 15:48:23 -0400 set default python to python3 * f8623e5 2018-03-24 21:02:01 -0400 add lower-constraints job * 84a646c 2018-03-15 06:45:10 +0000 Updated from global requirements * 85b7139 2018-01-24 17:58:52 +0000 Update reno for stable/queens * 08228a1 2018-01-24 00:48:55 +0000 Updated from global requirements * 22b2b86 2018-01-17 20:27:45 +0000 Updated from global requirements * 6191cf6 2018-01-16 04:02:08 +0000 Updated from global requirements [ Unreleased changes in openstack/ceilometermiddleware (master) ] Changes between 1.2.0 and 8332b9e * 8332b9e 2018-06-06 15:27:00 -0400 fix tox python3 overrides * 2f0efc7 2018-02-01 16:33:52 +0000 Update reno for stable/queens [ Unreleased changes in openstack/debtcollector (master) ] Changes between 1.19.0 and 858f15d * 858f15d 2018-06-06 14:53:48 -0400 fix tox python3 overrides * 2ee856f 2018-04-21 05:21:26 +0800 Trivial: Update pypi url to new url * 821478a 2018-04-16 11:16:59 -0400 remove obsolete tox environments * 03292af 2018-04-16 11:16:59 -0400 set default python to python3 * 9e2a2d1 2018-03-15 06:50:50 +0000 Updated from global requirements | * d8e616d 2018-03-24 21:02:07 -0400 add lower-constraints job | * 17ffa68 2018-03-21 18:26:09 +0530 pypy is not checked at gate |/ * 19616fc 2018-03-10 13:09:54 +0000 Updated from global requirements * b9a6777 2018-02-28 01:53:43 +0800 Update links in README * 0ff121e 2018-01-24 17:59:41 +0000 Update reno for stable/queens | * 4e6aabb 2018-01-24 00:51:19 +0000 Updated from global requirements |/ * dceacf1 2018-01-17 20:30:24 +0000 Updated from global requirements * 81d2312 2017-11-17 10:09:38 +0100 Remove setting of version/release from releasenotes [ Unreleased changes in openstack/glance_store (master) ] Changes between 0.24.0 and d50ab63 * d50ab63 2018-07-09 15:42:10 +0000 specify region on creating cinderclient * 573fde0 2018-06-06 15:27:00 -0400 fix tox python3 overrides * 5a20d47 2018-06-20 19:40:17 -0400 Deprecate store_capabilities_update_min_interval * 54b7ccb 2016-12-20 16:07:45 +1100 Disable verification for Keystone session in Swift [ Unreleased changes in openstack/ironic-lib (master) ] Changes between 2.13.0 and 9ba805d * 9ba805d 2018-07-11 18:16:12 +0700 Remove testrepository * a809787 2018-07-02 11:39:03 +0200 Expose GPT partitioning fixing method * 89fbae4 2018-06-27 14:50:06 -0400 Switch to using stestr * 68f017f 2018-06-27 12:36:05 +0200 Do not run API (functional) tests in the CI * 4e31bc5 2018-06-07 17:34:53 +0000 Remove unneccessary lambda from _test_make_partitions * e4dda71 2018-06-06 15:27:00 -0400 fix tox python3 overrides [ Unreleased changes in openstack/kuryr (master) ] Changes between 0.8.0 and cb6eef2 * 6dc4279 2018-05-11 09:33:19 +0700 Replace deprecated "auth_uri" by "www_authenticate_uri" [ Unreleased changes in openstack/mistral-lib (master) ] Changes between 0.5.0 and d1ccfd0 * d1ccfd0 2018-06-27 15:11:04 +0000 Fixed the documentation of 'run' params * df04a2b 2018-06-12 16:07:01 +0100 Add the restructuredtext check to the flake8 job * 37fea13 2018-06-06 15:27:00 -0400 fix tox python3 overrides * de6805b 2018-05-22 13:18:29 -0700 Switch to using stestr [ Unreleased changes in openstack/monasca-common 
(master) ] Changes between 2.10.0 and def805b * 31f9209 2018-07-10 15:08:55 +0200 Minor language changes and added license headers * 2ce968d 2018-07-09 13:49:19 +0200 Add base Dockerfile and supporting scripts * 6c91ded 2018-06-19 13:23:57 +0200 Convert README.md to ReStructuredTest format * 38e3b64 2018-06-15 08:52:20 +0200 Python3.5: Make KafkaProducer compatible with Py35 [ Unreleased changes in openstack/neutron-lib (master) ] Changes between 1.17.0 and a37d430 * a37d430 2018-07-12 13:13:21 -0600 rehome rpc and related plumbing * 5180f8f 2018-06-20 16:23:30 +0900 Cleanup unused key-value in the attribute of l3 * 51bb430 2018-06-26 23:53:41 +0200 Shim extension - segments peer subnet host routes, and api-ref [ Unreleased changes in openstack/os-brick (master) ] Changes between 2.5.1 and 9b729ef * 9b729ef 2018-06-25 10:43:14 +0300 Handle multiple errors in multipath -l parsing * 80d222a 2018-06-03 08:19:28 -0400 Switch to using stestr [ Unreleased changes in openstack/os-traits (master) ] Changes between 0.8.0 and 333d110 * 65a8daf 2018-06-13 08:18:25 -0500 normalize_name helper [ Unreleased changes in openstack/os-win (master) ] Changes between 4.0.0 and f9dc56f * f9dc56f 2018-04-11 11:52:27 -0400 uncap eventlet * 90b359d 2018-04-05 07:21:46 -0400 add lower-constraints job * c9285bd 2018-02-23 13:09:33 +0700 Removing pypy * 068d1b5 2018-03-09 11:38:52 +0800 Update links in README * 51cc7eb 2018-03-15 07:46:04 +0000 Updated from global requirements [ Unreleased changes in openstack/osc-placement (master) ] Changes between 1.1.0 and 9577cd8 * 134a463 2018-07-02 10:37:33 +0800 Fix docstring for delete allocation method | * e3a3b8b 2018-06-28 22:38:54 -0400 Remove doc/build during tox -e docs | | * 2bea1cc 2018-06-28 22:40:42 -0400 Fix the 1.6 release note format | |/ | | * 5883b82 2018-07-02 11:08:41 -0400 Allocation candidates parameter: required (v1.17) | | * 9f4e7eb 2018-07-02 11:08:41 -0400 Limit allocation candidates (v1.15, v1.16) | | * 565fb8d 2018-07-02 11:08:34 -0400 Add nested resource providers (v1.14) | |/ |/| * | f3ed1e7 2018-06-29 16:56:41 -0400 New dict format of allocations (v1.11, v1.12) * | d343dcb 2018-06-29 16:52:46 -0400 CLI allocation candidates (v1.10) * | fcc8081 2018-06-29 16:52:41 -0400 Usages per project and user (v1.8, v1.9) |/ * 06b5738 2018-06-06 15:27:01 -0400 fix tox python3 overrides * d839cd9 2018-05-17 09:54:17 -0400 Resource class set (v1.7) * 7882ed3 2018-05-17 09:54:17 -0400 Fix error message asserts in functional test * 61b08c5 2018-05-15 14:35:45 +0200 CLI for traits (v1.6) * 0a5493f 2018-05-01 13:24:19 -0400 RP delete inventories (v1.5) * 61d5173 2018-05-08 12:51:12 +0200 Fix error message in test assert [ Unreleased changes in openstack/oslo.config (master) ] Changes between 6.3.0 and 6ddee6d * 8b1a0ff 2018-06-25 10:38:54 +0200 Add example group for the URI driver * e233fc5 2018-06-25 10:17:12 +0200 Add config_source option * 9dfca14 2018-06-06 16:03:27 +0200 Create INI file ConfigurationSourceDriver. 
* c5e57c0 2018-05-23 17:23:06 +0200 ConfigurationSource base class * 2321729 2018-05-15 11:34:17 -0400 Base class for a configuration driver * 6a94cbc 2018-06-28 10:55:07 -0400 move configuration option list to the configuration guide [ Unreleased changes in openstack/oslo.messaging (master) ] Changes between 8.0.0 and 7dc7684 * 7dc7684 2018-07-11 15:22:21 +0200 Bump py-amqp to >= 2.3.0 * a84c946 2018-07-03 13:43:40 -0400 No longer allow redundant calls to server start() * dfb83f4 2018-07-05 06:38:38 -0500 py37: deal with Exception repr changes [ Unreleased changes in openstack/oslo.middleware (master) ] Changes between 3.35.0 and 48ec101 * 48ec101 2018-07-04 08:20:45 +0700 Switch to stestr * 8c7fa5b 2018-06-21 13:05:43 +0800 Add release notes link to README * 6e90d28 2018-06-06 14:53:49 -0400 fix tox python3 overrides * 522a7bf 2018-05-02 11:26:16 -0400 Remove stale pip-missing-reqs tox test * 0d02cb7 2018-04-21 10:57:58 +0800 Trivial: Update pypi url to new url * 2c55731 2018-04-13 16:02:29 -0400 set default python to python3 * 880f29d 2018-03-24 21:02:38 -0400 add lower-constraints job * 1c50f5e 2018-03-21 08:56:07 +0000 Updated from global requirements | * 7280af2 2018-03-21 18:21:04 +0530 pypy is not checked at gate |/ * f277b87 2018-03-02 10:30:19 +0800 Follow the new PTI for document build [ Unreleased changes in openstack/oslo.reports (master) ] Changes between 1.28.0 and 8d49f91 * 8d49f91 2018-07-03 16:05:38 +0700 Switch to stestr * 3cd1e76 2018-06-21 13:13:51 +0800 Add release notes link to README * 5d7035c 2018-05-22 02:57:55 +0000 Remove the remaining of the removed option | * 055d347 2018-06-06 14:53:49 -0400 fix tox python3 overrides |/ * 884cee9 2018-05-11 09:49:55 +0700 Replace deprecated "auth_uri" by "www_authenticate_uri" * 05f2456 2018-05-02 11:33:20 -0400 Remove stale pip-missing-reqs and pypy tox tests * 5fee38d 2018-04-21 11:07:16 +0800 Trivial: Update pypi url to new url [ Unreleased changes in openstack/ovsdbapp (master) ] Changes between 0.11.0 and f631143 * f631143 2018-07-10 16:23:46 +0700 Switch to stestr * 2ee22a5 2018-07-02 12:53:26 +0000 Fix python3 compat with debug_venv.py | * 62a6190 2018-06-29 14:52:26 +0100 Port Group's letfovers |/ * 7e980f1 2018-06-06 13:28:41 +0200 Add Port Group ACL commands * ad47adb 2018-04-23 14:44:46 +0800 Add QoS command for ovn northbound db. 
[ Unreleased changes in openstack/pycadf (master) ] Changes between 2.7.0 and 7df2d59 * 56797cc 2018-05-21 20:53:18 -0400 Remove moxstubout usage * 2c275b0 2018-06-06 15:27:01 -0400 fix tox python3 overrides * 4064124 2018-05-10 15:26:45 +0000 add lower-constraints job | * ca0ad03 2018-04-21 10:07:34 +0800 Trivial: Update pypi url to new url | * c5c1bbc 2018-03-15 07:53:37 +0000 Updated from global requirements |/ * 19583c0 2018-01-27 18:28:08 +0000 Updated from global requirements [ Unreleased changes in openstack-infra/shade (master) ] Changes between 1.28.0 and a8efa52 * 6099e44 2018-05-27 08:57:30 -0500 Remove shade-ansible-devel job | * 7460ad3 2018-07-03 16:31:58 -0400 Fix for passing dict for get_* methods | * e95d8e9 2018-06-24 16:36:04 -0500 Finish migrating image tests to requests-mock | * 7c9d461 2018-06-24 10:36:34 -0500 Convert image_client mocks in test_shade_operator | * 43977d1 2018-06-24 10:36:34 -0500 Convert test_caching to requests-mock | * 43e216b 2018-06-24 10:36:34 -0500 Convert domain params tests to requests_mock | * abd61fd 2018-06-24 10:36:34 -0500 Use RequestsMockTestCase everywhere | | * 1416470 2018-06-25 10:07:13 -0500 Switch bifrost jobs to nonvoting | |/ | * e1f5242 2018-06-20 14:55:33 +0800 add release notes to README.rst | * 07a4b84 2018-06-18 22:46:32 -0400 Change 'Member' role reference to 'member' | * c12ebc1 2018-06-06 15:27:01 -0400 fix tox python3 overrides * 949982c 2018-05-27 08:48:00 -0500 Update ansible test job to run against stable-2.5 | * ab28399 2018-05-21 13:27:40 -0700 Switch to iterable version of server listing (and expose iterable method) * | 60aafcc 2018-05-22 15:43:22 +0200 Allow explicitly setting enable_snat to either value |/ * dcbcfbf 2018-05-12 10:31:30 -0500 Fix recent pep8 issues * 2b48637 2018-05-04 11:30:09 -0500 Use openstack.config directly for config * f29630d 2018-04-30 08:57:47 -0400 remove redundant information from release notes build * b95b0c7 2018-04-27 13:35:35 -0500 Make name setting in connect_as more resilient * 8f99e6e 2018-04-06 01:27:16 -0400 add lower-constraints job [ Unreleased changes in openstack/stevedore (master) ] Changes between 1.28.0 and 64f70f2 * 64f70f2 2018-07-06 08:27:28 +0700 Remove unnecessary py27 testenv * 963a7d8 2018-07-05 18:04:48 +0700 Switch to stestr * 2362979 2018-06-06 16:17:02 -0400 fix tox python3 overrides * f641e9a 2018-05-01 15:53:28 +0000 Trivial: Update pypi url to new url * 68a9a4f 2018-04-21 09:21:32 +0800 Trivial: Update pypi url to new url * 4ba1d97 2018-04-13 16:15:04 -0400 set default python to python3 * 40064ea 2018-03-24 21:03:12 -0400 add lower-constraints job * d42d448 2018-03-15 09:34:12 +0000 Updated from global requirements * 95445e1 2018-03-02 17:59:50 +0800 Update links in README * 4e05b9a 2018-01-24 18:10:59 +0000 Update reno for stable/queens | * e65e119 2018-01-24 01:36:41 +0000 Updated from global requirements |/ * 8a9bcee 2018-01-18 03:35:02 +0000 Updated from global requirements * 5d0fb11 2018-01-08 12:28:50 -0600 Follow the new PTI for document build [ Unreleased changes in openstack/sushy (master) ] Changes between 1.5.0 and e540017 * 1831b87 2018-06-28 17:46:21 +0300 Remove etag from Bios * e96cb4e 2018-06-26 14:02:44 +0300 Hide Attribute Registry property in Bios * fb44452 2018-06-25 10:59:32 +0300 Introduce BIOS API [ Unreleased changes in openstack/tosca-parser (master) ] Changes between 1.0.0 and 3eb67e7 * 3eb67e7 2018-06-06 15:27:01 -0400 fix tox python3 overrides * 009e5f2 2018-06-01 11:47:48 -0500 Handle deriving from custom policy definitions * 
d5cacdb 2018-05-30 08:30:00 -0700 Add EXPERIMENTAL support for MEC | * 129720c 2018-05-17 22:18:44 +0900 Follow the new PTI for document build |/ * 55c0663 2018-05-16 16:38:20 +0900 Switch from oslosphinx to openstackdocstheme From jaosorior at gmail.com Fri Jul 13 19:11:53 2018 From: jaosorior at gmail.com (Juan Antonio Osorio) Date: Fri, 13 Jul 2018 14:11:53 -0500 Subject: [openstack-dev] [tripleo] Stein blueprint - Plan to remove Keepalived support (replaced by Pacemaker) In-Reply-To: References: Message-ID: Sounds good to me. Even if pacemaker is heavier, less options and consistency is better. Greetings from Mexico :D On Fri, 13 Jul 2018, 13:33 Emilien Macchi, wrote: > Greetings, > > We have been supporting both Keepalived and Pacemaker to handle VIP > management. > Keepalived is actually the tool used by the undercloud when SSL is enabled > (for SSL termination). > While Pacemaker is used on the overcloud to handle VIPs but also services > HA. > > I see some benefits at removing support for keepalived and deploying > Pacemaker by default: > - pacemaker can be deployed on one node (we actually do it in CI), so can > be deployed on the undercloud to handle VIPs and manage HA as well. > - it'll allow to extend undercloud & standalone use cases to support > multinode one day, with HA and SSL, like we already have on the overcloud. > - it removes the complexity of managing two tools so we'll potentially > removing code in TripleO. > - of course since pacemaker features from overcloud would be usable in > standalone environment, but also on the undercloud. > > There is probably some downside, the first one is I think Keepalived is > much more lightweight than Pacemaker, we probably need to run some > benchmark here and make sure we don't make the undercloud heavier than it > is now. > > I went ahead and created this blueprint for Stein: > https://blueprints.launchpad.net/tripleo/+spec/undercloud-pacemaker-default > I also plan to prototype some basic code soon and provide an upgrade path > if we accept this blueprint. > > This is something I would like to discuss here and at the PTG, feel free > to bring questions/concerns, > Thanks! > -- > Emilien Macchi > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Fri Jul 13 19:19:35 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Fri, 13 Jul 2018 14:19:35 -0500 Subject: [openstack-dev] [keystone] Feature Status and Exceptions Message-ID: Hey all, As noted in the weekly report [0], today is feature freeze for keystone-related specifications. I wanted to elaborate on each specification so that our plan is clear moving forward. *Unified Limits** ** *I propose that we issue a feature freeze exception for this work. Mainly because the changes are relatively isolated and low-risk. The majority of the feedback on the approach is being held up by an interface decision, which doesn't impact users, it's certainly more of a developer preference [1]. That said, I don't think it would be too ambitious to focus reviews on this next week and iron out the last few bits well before rocky-3. *Default Roles** * The implementation to ensure each of the new defaults is available after installing keystone is complete. 
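As a quick sanity check (just a sketch, assuming an admin credential file is sourced against a fresh deployment), the new defaults should be visible right after bootstrap:

# keystone-manage bootstrap is what ensures the new defaults exist
openstack role list -f value -c Name
# the output should include at least: admin, member, reader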
We realized that incorporating those new roles into keystone's default policies would be a lot easier after the flask work lands [2]. Instead of doing a bunch of work to incorporate those default and then re-doing it to accommodate flask, I think we have a safe checkpoint where we are right now. We can use free cycles during the RC period to queue up those implementation, mark them with a -2, and hit the ground running in Stein. This approach feels like the safest compromise between risk and reward. *Capability Lists** * The capability lists involves a lot of work, not just within keystone, but also keystonemiddleware, which will freeze next week. I think it's reasonable to say that this will be something that has to be pushed to Stein [3]. *MFA Receipts** * Much of the code used in the existing approach uses a lot of the same patterns from the token provider API within keystone [4]. Since the UUID and SQL parts of the token provider API have been removed, we're also in the middle of cleaning up a ton of technical debt in that area [5]. Adrian seems OK giving us the opportunity to finish cleaning things up before reworking his proposal for authentication receipts. IMO, this seems totally reasonable since it will help us ensure the new code for authentication receipts doesn't have the bad patterns that have plagued us with the token provider API. Does anyone have objections to any of these proposals? If not, I can start bumping various specs to reflect the status described here. [0] http://lists.openstack.org/pipermail/openstack-dev/2018-July/132202.html [1] https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:master+topic:bp/strict-two-level-model [2] https://review.openstack.org/#/q/(status:open+OR+status:merged)+project:openstack/keystone+branch:master+topic:bug/1776504 [3] https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:master+topic:bp/whitelist-extension-for-app-creds [4] https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:master+topic:bp/mfa-auth-receipt [5] https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:master+topic:bug/1778945 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From hrybacki at redhat.com Fri Jul 13 19:37:08 2018 From: hrybacki at redhat.com (Harry Rybacki) Date: Fri, 13 Jul 2018 15:37:08 -0400 Subject: [openstack-dev] [keystone] Feature Status and Exceptions In-Reply-To: References: Message-ID: On Fri, Jul 13, 2018 at 3:20 PM Lance Bragstad wrote: > > Hey all, > > As noted in the weekly report [0], today is feature freeze for keystone-related specifications. I wanted to elaborate on each specification so that our plan is clear moving forward. > > Unified Limits > > I propose that we issue a feature freeze exception for this work. Mainly because the changes are relatively isolated and low-risk. The majority of the feedback on the approach is being held up by an interface decision, which doesn't impact users, it's certainly more of a developer preference [1]. > > That said, I don't think it would be too ambitious to focus reviews on this next week and iron out the last few bits well before rocky-3. > > Default Roles > > The implementation to ensure each of the new defaults is available after installing keystone is complete. 
We realized that incorporating those new roles into keystone's default policies would be a lot easier after the flask work lands [2]. Instead of doing a bunch of work to incorporate those default and then re-doing it to accommodate flask, I think we have a safe checkpoint where we are right now. We can use free cycles during the RC period to queue up those implementation, mark them with a -2, and hit the ground running in Stein. This approach feels like the safest compromise between risk and reward. > +1 to this approach. > Capability Lists > > The capability lists involves a lot of work, not just within keystone, but also keystonemiddleware, which will freeze next week. I think it's reasonable to say that this will be something that has to be pushed to Stein [3]. > > MFA Receipts > > Much of the code used in the existing approach uses a lot of the same patterns from the token provider API within keystone [4]. Since the UUID and SQL parts of the token provider API have been removed, we're also in the middle of cleaning up a ton of technical debt in that area [5]. Adrian seems OK giving us the opportunity to finish cleaning things up before reworking his proposal for authentication receipts. IMO, this seems totally reasonable since it will help us ensure the new code for authentication receipts doesn't have the bad patterns that have plagued us with the token provider API. > > > Does anyone have objections to any of these proposals? If not, I can start bumping various specs to reflect the status described here. > > > [0] http://lists.openstack.org/pipermail/openstack-dev/2018-July/132202.html > [1] https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:master+topic:bp/strict-two-level-model > [2] https://review.openstack.org/#/q/(status:open+OR+status:merged)+project:openstack/keystone+branch:master+topic:bug/1776504 > [3] https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:master+topic:bp/whitelist-extension-for-app-creds > [4] https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:master+topic:bp/mfa-auth-receipt > [5] https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:master+topic:bug/1778945 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mriedemos at gmail.com Fri Jul 13 20:04:05 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 13 Jul 2018 15:04:05 -0500 Subject: [openstack-dev] [nova]API update week 5-11 In-Reply-To: <1648c0de17c.116c7e05210664.8822607615684205579@ghanshyammann.com> References: <1648c0de17c.116c7e05210664.8822607615684205579@ghanshyammann.com> Message-ID: On 7/11/2018 8:14 PM, Ghanshyam Mann wrote: > 4. Volume multiattach enhancements: > -https://blueprints.launchpad.net/nova/+spec/volume-multiattach-enhancements > -https://review.openstack.org/#/q/topic:bp/volume-multiattach-enhancements+(status:open+OR+status:merged) > - Weekly Progress: mriedem mentioned in last week status mail that he will continue work on this. I failed to work on this again this week since I spent the majority of my week doing reviews. 
--
Thanks,

Matt

From mriedemos at gmail.com  Fri Jul 13 20:05:25 2018
From: mriedemos at gmail.com (Matt Riedemann)
Date: Fri, 13 Jul 2018 15:05:25 -0500
Subject: Re: [openstack-dev] [nova]API update week 5-11
In-Reply-To:
References: <1648c0de17c.116c7e05210664.8822607615684205579@ghanshyammann.com>
Message-ID: <5215aef5-bc15-7e9a-d3d6-bdcbd1d505aa@gmail.com>

On 7/11/2018 9:03 PM, Zhenyu Zheng wrote:
> 2. Abort live migration in queued state:
> -https://blueprints.launchpad.net/nova/+spec/abort-live-migration-in-queued-status
> -https://review.openstack.org/#/q/topic:bp/abort-live-migration-in-queued-status+(status:open+OR+status:merged)
>
> - Weekly Progress: Review is ongoing and it is in a nova runway this
> week. In the API office hour, we discussed doing the compute
> service version checks on the compute.api.py side rather
> than on the rpc side. Dan's point is to do it on the rpc side, where
> the migration status can be changed to running. We decided to discuss
> it further on the patch.
>
> In my own defence, Dan's point seems to be that the actual rpc
> version pin could be set to be lower than the can_send_version even when
> the service version is new enough, so he thinks doing it in rpc is better.

That series is all rebased now and I'm +2 up the stack until the API
change, where I'm just +1 since I wrote the compute service version
checking part, but I think this series is ready for wider review.

--
Thanks,

Matt

From eumel at arcor.de  Fri Jul 13 20:16:05 2018
From: eumel at arcor.de (Frank Kloeker)
Date: Fri, 13 Jul 2018 22:16:05 +0200
Subject: [openstack-dev] [all][ptg] Wiki page for Etherpads
Message-ID:

Hello,

I noticed there was no wiki page for the upcoming PTG in Denver, so I've
just created one [1] and added the links I found here. Please check and
update if required.

thx

Frank

[1] https://wiki.openstack.org/wiki/PTG/Stein/Etherpads

From mriedemos at gmail.com  Fri Jul 13 20:23:55 2018
From: mriedemos at gmail.com (Matt Riedemann)
Date: Fri, 13 Jul 2018 15:23:55 -0500
Subject: [openstack-dev] [nova] Reminder on nova-network API removal work
Message-ID:

There are currently no open changes for the nova-network API removal
tracked here [1] but there are at least two low-hanging fruit APIs to
remove:

* os-floating-ips-bulk
* os-floating-ips-dns

It would be nice to at least get those removed before the feature
freeze. See one of the existing linked removal patches in the etherpad
for an example of how to do this, and/or read the doc [2].

[1] https://etherpad.openstack.org/p/nova-network-removal-rocky
[2] https://docs.openstack.org/nova/latest/contributor/api.html#removing-deprecated-apis

--
Thanks,

Matt

From eumel at arcor.de  Fri Jul 13 20:26:07 2018
From: eumel at arcor.de (Frank Kloeker)
Date: Fri, 13 Jul 2018 22:26:07 +0200
Subject: [openstack-dev] [docs][i18n][ptg] PTG infos on Etherpad
Message-ID: <4ff6ad70fc6127f063047fe549c81a64@arcor.de>

Hello,

the Docs and I18n team will also be present during the PTG in Denver.
The rough plan puts us on Monday/Tuesday. As usual we're in the same room
and will also use the same Etherpad [1], which I have shamelessly copied
from the last PTG in Denver, so all the useful links for Station 26 are
already there :)
Please write down your topics and your possible participation. The
Etherpad from the previous PTG is also linked as a reminder.
many thanks Frank [1] https://etherpad.openstack.org/p/docs-i18n-ptg-stein From jgrassler at suse.de Fri Jul 13 20:37:00 2018 From: jgrassler at suse.de (Johannes Grassler) Date: Fri, 13 Jul 2018 22:37:00 +0200 Subject: [openstack-dev] [keystone] Feature Status and Exceptions In-Reply-To: References: Message-ID: <20180713203700.qdifdkm7f3n47ezx@btw23.de> Hello, On Fri, Jul 13, 2018 at 02:19:35PM -0500, Lance Bragstad wrote: > *Capability Lists** > * > The capability lists involves a lot of work, not just within keystone, > but also keystonemiddleware, which will freeze next week. I think it's > reasonable to say that this will be something that has to be pushed to > Stein [3]. I was was planning to email you about that, too...I didn't have much time for it lately (rushing to get a few changes in Monasca in plus a whole bunch of packaging stuff) and with the deadline this close I didn't see much of a chance to get anything meaningful in. So +1 for Stein from my side. This time I can plan for and accomodate it by having less Monasca stuff on my plate... Cheers, Johannes -- Johannes Grassler, Cloud Developer SUSE Linux GmbH, HRB 21284 (AG Nürnberg) GF: Felix Imendörffer, Jane Smithard, Graham Norton Maxfeldstr. 5, 90409 Nürnberg, Germany From mriedemos at gmail.com Fri Jul 13 20:40:14 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 13 Jul 2018 15:40:14 -0500 Subject: [openstack-dev] [all][ptg] Wiki page for Etherpads In-Reply-To: References: Message-ID: <695ddcdf-dc72-d1b2-cdba-345b4248e19d@gmail.com> On 7/13/2018 3:16 PM, Frank Kloeker wrote: > Hello, > > just wondering there was not Wiki page for the upcoming PTG in Denver, > so I've just created [1] and put in the links what I found here. Please > check and update if required. > > thx > > Frank > > [1] https://wiki.openstack.org/wiki/PTG/Stein/Etherpads Thanks for doing that. -- Thanks, Matt From prometheanfire at gentoo.org Fri Jul 13 20:40:57 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Fri, 13 Jul 2018 15:40:57 -0500 Subject: [openstack-dev] [all][ptl][release] deadline for non-client library releases 19 July In-Reply-To: <1531507218-sup-7131@lrrr.local> References: <1531507218-sup-7131@lrrr.local> Message-ID: <20180713204057.cq6llnyc43zpuxg7@gentoo.org> On 18-07-13 14:57:58, Doug Hellmann wrote: > The deadline for releasing non-client libraries for Rocky is coming up > next week on 19 July (https://releases.openstack.org/rocky/schedule.html). > > We have a few libraries that have no releases at all this cycle, > which makes creating the stable branch problematic. Therefore, if > the PTL or release liaison do not propose a release by next week, > the release team may tag HEAD of master with an appropriate version > number (raising at least the minor value) to provide a place to > create the stable/rocky branch. (See Thread at > http://lists.openstack.org/pipermail/openstack-dev/2018-June/131341.html > for the discussion of this policy. We will need to decide whether > to tag based on whether there are any changes on master for the > repository.) > > - automaton > - ceilometermiddleware > - debtcollector > - pycadf > - requestsexceptions > - stevedore > > Below is the set of unreleased changes for known deliverables > classified as non-client libraries and that look like they may > warrant a release (i.e., they are not only CI configuration changes). 
> Any items in the report below that are not also in the list above > will not be tagged by the release team because we already have a > tag we can use to create the stable branch. The unrelease changes > for those deliverables will not be available to Rocky users unless > they are backported to the stable branch. > > If you manage one of these deliverables, please review the list and > consider proposing a release before the deadline. > > Doug > > > [ Unreleased changes in openstack/automaton (master) ] > > Changes between 1.14.0 and ce03d76 > > * ce03d76 2018-07-11 09:48:53 +0700 Switch to stestr > * 5d8233a 2018-06-06 14:53:48 -0400 fix tox python3 overrides > * 885b3d4 2018-04-21 02:40:23 +0800 Trivial: Update pypi url to new url > * 67b3093 2018-04-13 15:48:27 -0400 fix list of default virtualenvs > * aa12fe5 2018-04-13 15:48:23 -0400 set default python to python3 > * f8623e5 2018-03-24 21:02:01 -0400 add lower-constraints job > * 84a646c 2018-03-15 06:45:10 +0000 Updated from global requirements > * 85b7139 2018-01-24 17:58:52 +0000 Update reno for stable/queens > * 08228a1 2018-01-24 00:48:55 +0000 Updated from global requirements > * 22b2b86 2018-01-17 20:27:45 +0000 Updated from global requirements > * 6191cf6 2018-01-16 04:02:08 +0000 Updated from global requirements > > > > [ Unreleased changes in openstack/ceilometermiddleware (master) ] > > Changes between 1.2.0 and 8332b9e > > * 8332b9e 2018-06-06 15:27:00 -0400 fix tox python3 overrides > * 2f0efc7 2018-02-01 16:33:52 +0000 Update reno for stable/queens > > > > > [ Unreleased changes in openstack/debtcollector (master) ] > > Changes between 1.19.0 and 858f15d > > * 858f15d 2018-06-06 14:53:48 -0400 fix tox python3 overrides > * 2ee856f 2018-04-21 05:21:26 +0800 Trivial: Update pypi url to new url > * 821478a 2018-04-16 11:16:59 -0400 remove obsolete tox environments > * 03292af 2018-04-16 11:16:59 -0400 set default python to python3 > * 9e2a2d1 2018-03-15 06:50:50 +0000 Updated from global requirements > | * d8e616d 2018-03-24 21:02:07 -0400 add lower-constraints job > | * 17ffa68 2018-03-21 18:26:09 +0530 pypy is not checked at gate > |/ > * 19616fc 2018-03-10 13:09:54 +0000 Updated from global requirements > * b9a6777 2018-02-28 01:53:43 +0800 Update links in README > * 0ff121e 2018-01-24 17:59:41 +0000 Update reno for stable/queens > | * 4e6aabb 2018-01-24 00:51:19 +0000 Updated from global requirements > |/ > * dceacf1 2018-01-17 20:30:24 +0000 Updated from global requirements > * 81d2312 2017-11-17 10:09:38 +0100 Remove setting of version/release from releasenotes > > > > [ Unreleased changes in openstack/glance_store (master) ] > > Changes between 0.24.0 and d50ab63 > > * d50ab63 2018-07-09 15:42:10 +0000 specify region on creating cinderclient > * 573fde0 2018-06-06 15:27:00 -0400 fix tox python3 overrides > * 5a20d47 2018-06-20 19:40:17 -0400 Deprecate store_capabilities_update_min_interval > * 54b7ccb 2016-12-20 16:07:45 +1100 Disable verification for Keystone session in Swift > > > > > [ Unreleased changes in openstack/ironic-lib (master) ] > > Changes between 2.13.0 and 9ba805d > > * 9ba805d 2018-07-11 18:16:12 +0700 Remove testrepository > * a809787 2018-07-02 11:39:03 +0200 Expose GPT partitioning fixing method > * 89fbae4 2018-06-27 14:50:06 -0400 Switch to using stestr > * 68f017f 2018-06-27 12:36:05 +0200 Do not run API (functional) tests in the CI > * 4e31bc5 2018-06-07 17:34:53 +0000 Remove unneccessary lambda from _test_make_partitions > * e4dda71 2018-06-06 15:27:00 -0400 fix tox python3 overrides > > > > 
[ Unreleased changes in openstack/kuryr (master) ] > > Changes between 0.8.0 and cb6eef2 > > * 6dc4279 2018-05-11 09:33:19 +0700 Replace deprecated "auth_uri" by "www_authenticate_uri" > > > > [ Unreleased changes in openstack/mistral-lib (master) ] > > Changes between 0.5.0 and d1ccfd0 > > * d1ccfd0 2018-06-27 15:11:04 +0000 Fixed the documentation of 'run' params > * df04a2b 2018-06-12 16:07:01 +0100 Add the restructuredtext check to the flake8 job > * 37fea13 2018-06-06 15:27:00 -0400 fix tox python3 overrides > * de6805b 2018-05-22 13:18:29 -0700 Switch to using stestr > > > > [ Unreleased changes in openstack/monasca-common (master) ] > > Changes between 2.10.0 and def805b > > * 31f9209 2018-07-10 15:08:55 +0200 Minor language changes and added license headers > * 2ce968d 2018-07-09 13:49:19 +0200 Add base Dockerfile and supporting scripts > * 6c91ded 2018-06-19 13:23:57 +0200 Convert README.md to ReStructuredTest format > * 38e3b64 2018-06-15 08:52:20 +0200 Python3.5: Make KafkaProducer compatible with Py35 > > > > [ Unreleased changes in openstack/neutron-lib (master) ] > > Changes between 1.17.0 and a37d430 > > * a37d430 2018-07-12 13:13:21 -0600 rehome rpc and related plumbing > * 5180f8f 2018-06-20 16:23:30 +0900 Cleanup unused key-value in the attribute of l3 > * 51bb430 2018-06-26 23:53:41 +0200 Shim extension - segments peer subnet host routes, and api-ref > > > > > [ Unreleased changes in openstack/os-brick (master) ] > > Changes between 2.5.1 and 9b729ef > > * 9b729ef 2018-06-25 10:43:14 +0300 Handle multiple errors in multipath -l parsing > * 80d222a 2018-06-03 08:19:28 -0400 Switch to using stestr > > > > [ Unreleased changes in openstack/os-traits (master) ] > > Changes between 0.8.0 and 333d110 > > * 65a8daf 2018-06-13 08:18:25 -0500 normalize_name helper > > > > [ Unreleased changes in openstack/os-win (master) ] > > Changes between 4.0.0 and f9dc56f > > * f9dc56f 2018-04-11 11:52:27 -0400 uncap eventlet > * 90b359d 2018-04-05 07:21:46 -0400 add lower-constraints job > * c9285bd 2018-02-23 13:09:33 +0700 Removing pypy > * 068d1b5 2018-03-09 11:38:52 +0800 Update links in README > * 51cc7eb 2018-03-15 07:46:04 +0000 Updated from global requirements > > > > > [ Unreleased changes in openstack/osc-placement (master) ] > > Changes between 1.1.0 and 9577cd8 > > * 134a463 2018-07-02 10:37:33 +0800 Fix docstring for delete allocation method > | * e3a3b8b 2018-06-28 22:38:54 -0400 Remove doc/build during tox -e docs > | | * 2bea1cc 2018-06-28 22:40:42 -0400 Fix the 1.6 release note format > | |/ > | | * 5883b82 2018-07-02 11:08:41 -0400 Allocation candidates parameter: required (v1.17) > | | * 9f4e7eb 2018-07-02 11:08:41 -0400 Limit allocation candidates (v1.15, v1.16) > | | * 565fb8d 2018-07-02 11:08:34 -0400 Add nested resource providers (v1.14) > | |/ > |/| > * | f3ed1e7 2018-06-29 16:56:41 -0400 New dict format of allocations (v1.11, v1.12) > * | d343dcb 2018-06-29 16:52:46 -0400 CLI allocation candidates (v1.10) > * | fcc8081 2018-06-29 16:52:41 -0400 Usages per project and user (v1.8, v1.9) > |/ > * 06b5738 2018-06-06 15:27:01 -0400 fix tox python3 overrides > * d839cd9 2018-05-17 09:54:17 -0400 Resource class set (v1.7) > * 7882ed3 2018-05-17 09:54:17 -0400 Fix error message asserts in functional test > * 61b08c5 2018-05-15 14:35:45 +0200 CLI for traits (v1.6) > * 0a5493f 2018-05-01 13:24:19 -0400 RP delete inventories (v1.5) > * 61d5173 2018-05-08 12:51:12 +0200 Fix error message in test assert > > > > [ Unreleased changes in openstack/oslo.config (master) ] > > Changes 
between 6.3.0 and 6ddee6d > > * 8b1a0ff 2018-06-25 10:38:54 +0200 Add example group for the URI driver > * e233fc5 2018-06-25 10:17:12 +0200 Add config_source option > * 9dfca14 2018-06-06 16:03:27 +0200 Create INI file ConfigurationSourceDriver. > * c5e57c0 2018-05-23 17:23:06 +0200 ConfigurationSource base class > * 2321729 2018-05-15 11:34:17 -0400 Base class for a configuration driver > * 6a94cbc 2018-06-28 10:55:07 -0400 move configuration option list to the configuration guide > > > > > [ Unreleased changes in openstack/oslo.messaging (master) ] > > Changes between 8.0.0 and 7dc7684 > > * 7dc7684 2018-07-11 15:22:21 +0200 Bump py-amqp to >= 2.3.0 > * a84c946 2018-07-03 13:43:40 -0400 No longer allow redundant calls to server start() > * dfb83f4 2018-07-05 06:38:38 -0500 py37: deal with Exception repr changes > > > > [ Unreleased changes in openstack/oslo.middleware (master) ] > > Changes between 3.35.0 and 48ec101 > > * 48ec101 2018-07-04 08:20:45 +0700 Switch to stestr > * 8c7fa5b 2018-06-21 13:05:43 +0800 Add release notes link to README > * 6e90d28 2018-06-06 14:53:49 -0400 fix tox python3 overrides > * 522a7bf 2018-05-02 11:26:16 -0400 Remove stale pip-missing-reqs tox test > * 0d02cb7 2018-04-21 10:57:58 +0800 Trivial: Update pypi url to new url > * 2c55731 2018-04-13 16:02:29 -0400 set default python to python3 > * 880f29d 2018-03-24 21:02:38 -0400 add lower-constraints job > * 1c50f5e 2018-03-21 08:56:07 +0000 Updated from global requirements > | * 7280af2 2018-03-21 18:21:04 +0530 pypy is not checked at gate > |/ > * f277b87 2018-03-02 10:30:19 +0800 Follow the new PTI for document build > > > > [ Unreleased changes in openstack/oslo.reports (master) ] > > Changes between 1.28.0 and 8d49f91 > > * 8d49f91 2018-07-03 16:05:38 +0700 Switch to stestr > * 3cd1e76 2018-06-21 13:13:51 +0800 Add release notes link to README > * 5d7035c 2018-05-22 02:57:55 +0000 Remove the remaining of the removed option > | * 055d347 2018-06-06 14:53:49 -0400 fix tox python3 overrides > |/ > * 884cee9 2018-05-11 09:49:55 +0700 Replace deprecated "auth_uri" by "www_authenticate_uri" > * 05f2456 2018-05-02 11:33:20 -0400 Remove stale pip-missing-reqs and pypy tox tests > * 5fee38d 2018-04-21 11:07:16 +0800 Trivial: Update pypi url to new url > > > > > [ Unreleased changes in openstack/ovsdbapp (master) ] > > Changes between 0.11.0 and f631143 > > * f631143 2018-07-10 16:23:46 +0700 Switch to stestr > * 2ee22a5 2018-07-02 12:53:26 +0000 Fix python3 compat with debug_venv.py > | * 62a6190 2018-06-29 14:52:26 +0100 Port Group's letfovers > |/ > * 7e980f1 2018-06-06 13:28:41 +0200 Add Port Group ACL commands > * ad47adb 2018-04-23 14:44:46 +0800 Add QoS command for ovn northbound db. 
> > > > [ Unreleased changes in openstack/pycadf (master) ] > > Changes between 2.7.0 and 7df2d59 > > * 56797cc 2018-05-21 20:53:18 -0400 Remove moxstubout usage > * 2c275b0 2018-06-06 15:27:01 -0400 fix tox python3 overrides > * 4064124 2018-05-10 15:26:45 +0000 add lower-constraints job > | * ca0ad03 2018-04-21 10:07:34 +0800 Trivial: Update pypi url to new url > | * c5c1bbc 2018-03-15 07:53:37 +0000 Updated from global requirements > |/ > * 19583c0 2018-01-27 18:28:08 +0000 Updated from global requirements > > > > [ Unreleased changes in openstack-infra/shade (master) ] > > Changes between 1.28.0 and a8efa52 > > * 6099e44 2018-05-27 08:57:30 -0500 Remove shade-ansible-devel job > | * 7460ad3 2018-07-03 16:31:58 -0400 Fix for passing dict for get_* methods > | * e95d8e9 2018-06-24 16:36:04 -0500 Finish migrating image tests to requests-mock > | * 7c9d461 2018-06-24 10:36:34 -0500 Convert image_client mocks in test_shade_operator > | * 43977d1 2018-06-24 10:36:34 -0500 Convert test_caching to requests-mock > | * 43e216b 2018-06-24 10:36:34 -0500 Convert domain params tests to requests_mock > | * abd61fd 2018-06-24 10:36:34 -0500 Use RequestsMockTestCase everywhere > | | * 1416470 2018-06-25 10:07:13 -0500 Switch bifrost jobs to nonvoting > | |/ > | * e1f5242 2018-06-20 14:55:33 +0800 add release notes to README.rst > | * 07a4b84 2018-06-18 22:46:32 -0400 Change 'Member' role reference to 'member' > | * c12ebc1 2018-06-06 15:27:01 -0400 fix tox python3 overrides > * 949982c 2018-05-27 08:48:00 -0500 Update ansible test job to run against stable-2.5 > | * ab28399 2018-05-21 13:27:40 -0700 Switch to iterable version of server listing (and expose iterable method) > * | 60aafcc 2018-05-22 15:43:22 +0200 Allow explicitly setting enable_snat to either value > |/ > * dcbcfbf 2018-05-12 10:31:30 -0500 Fix recent pep8 issues > * 2b48637 2018-05-04 11:30:09 -0500 Use openstack.config directly for config > * f29630d 2018-04-30 08:57:47 -0400 remove redundant information from release notes build > * b95b0c7 2018-04-27 13:35:35 -0500 Make name setting in connect_as more resilient > * 8f99e6e 2018-04-06 01:27:16 -0400 add lower-constraints job > > > > [ Unreleased changes in openstack/stevedore (master) ] > > Changes between 1.28.0 and 64f70f2 > > * 64f70f2 2018-07-06 08:27:28 +0700 Remove unnecessary py27 testenv > * 963a7d8 2018-07-05 18:04:48 +0700 Switch to stestr > * 2362979 2018-06-06 16:17:02 -0400 fix tox python3 overrides > * f641e9a 2018-05-01 15:53:28 +0000 Trivial: Update pypi url to new url > * 68a9a4f 2018-04-21 09:21:32 +0800 Trivial: Update pypi url to new url > * 4ba1d97 2018-04-13 16:15:04 -0400 set default python to python3 > * 40064ea 2018-03-24 21:03:12 -0400 add lower-constraints job > * d42d448 2018-03-15 09:34:12 +0000 Updated from global requirements > * 95445e1 2018-03-02 17:59:50 +0800 Update links in README > * 4e05b9a 2018-01-24 18:10:59 +0000 Update reno for stable/queens > | * e65e119 2018-01-24 01:36:41 +0000 Updated from global requirements > |/ > * 8a9bcee 2018-01-18 03:35:02 +0000 Updated from global requirements > * 5d0fb11 2018-01-08 12:28:50 -0600 Follow the new PTI for document build > > > > [ Unreleased changes in openstack/sushy (master) ] > > Changes between 1.5.0 and e540017 > > * 1831b87 2018-06-28 17:46:21 +0300 Remove etag from Bios > * e96cb4e 2018-06-26 14:02:44 +0300 Hide Attribute Registry property in Bios > * fb44452 2018-06-25 10:59:32 +0300 Introduce BIOS API > > > > [ Unreleased changes in openstack/tosca-parser (master) ] > > Changes between 1.0.0 
and 3eb67e7 > > * 3eb67e7 2018-06-06 15:27:01 -0400 fix tox python3 overrides > * 009e5f2 2018-06-01 11:47:48 -0500 Handle deriving from custom policy definitions > * d5cacdb 2018-05-30 08:30:00 -0700 Add EXPERIMENTAL support for MEC > | * 129720c 2018-05-17 22:18:44 +0900 Follow the new PTI for document build > |/ > * 55c0663 2018-05-16 16:38:20 +0900 Switch from oslosphinx to openstackdocstheme > we seem to have a lot of projects with requirements updates not merged as well. If those updates are wanted they should be merged and a release made. https://review.openstack.org/#/q/is:open+branch:master+owner:proposal-bot+topic:openstack/requirements -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From lbragstad at gmail.com Fri Jul 13 20:50:33 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Fri, 13 Jul 2018 15:50:33 -0500 Subject: [openstack-dev] [keystone] Feature Status and Exceptions In-Reply-To: <20180713203700.qdifdkm7f3n47ezx@btw23.de> References: <20180713203700.qdifdkm7f3n47ezx@btw23.de> Message-ID: On 07/13/2018 03:37 PM, Johannes Grassler wrote: > Hello, > > On Fri, Jul 13, 2018 at 02:19:35PM -0500, Lance Bragstad wrote: >> *Capability Lists** >> * >> The capability lists involves a lot of work, not just within keystone, >> but also keystonemiddleware, which will freeze next week. I think it's >> reasonable to say that this will be something that has to be pushed to >> Stein [3]. > I was was planning to email you about that, too...I didn't have much > time for it lately (rushing to get a few changes in Monasca in plus a > whole bunch of packaging stuff) and with the deadline this close I > didn't see much of a chance to get anything meaningful in. > > So +1 for Stein from my side. This time I can plan for and accomodate it > by having less Monasca stuff on my plate... +1 Thanks for confirming. There still seems to be quite a bit of discussion around the data model and layout. We can use the PTG to focus on that as a group if needed (and if you'll be there). > > Cheers, > > Johannes > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From lbragstad at gmail.com Fri Jul 13 20:59:06 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Fri, 13 Jul 2018 15:59:06 -0500 Subject: [openstack-dev] [keystone] Feature Status and Exceptions In-Reply-To: References: Message-ID: <29f22999-e25e-6a9c-6f9e-2eb0edc6b6d4@gmail.com> On 07/13/2018 02:37 PM, Harry Rybacki wrote: > On Fri, Jul 13, 2018 at 3:20 PM Lance Bragstad wrote: >> Hey all, >> >> As noted in the weekly report [0], today is feature freeze for keystone-related specifications. I wanted to elaborate on each specification so that our plan is clear moving forward. >> >> Unified Limits >> >> I propose that we issue a feature freeze exception for this work. Mainly because the changes are relatively isolated and low-risk. The majority of the feedback on the approach is being held up by an interface decision, which doesn't impact users, it's certainly more of a developer preference [1]. >> >> That said, I don't think it would be too ambitious to focus reviews on this next week and iron out the last few bits well before rocky-3. 
>> >> Default Roles >> >> The implementation to ensure each of the new defaults is available after installing keystone is complete. We realized that incorporating those new roles into keystone's default policies would be a lot easier after the flask work lands [2]. Instead of doing a bunch of work to incorporate those default and then re-doing it to accommodate flask, I think we have a safe checkpoint where we are right now. We can use free cycles during the RC period to queue up those implementation, mark them with a -2, and hit the ground running in Stein. This approach feels like the safest compromise between risk and reward. >> > +1 to this approach. I've proposed a couple updates to the specification, trying to clarify exactly what was implemented in the release [0]. [0] https://review.openstack.org/#/c/582673/ > >> Capability Lists >> >> The capability lists involves a lot of work, not just within keystone, but also keystonemiddleware, which will freeze next week. I think it's reasonable to say that this will be something that has to be pushed to Stein [3]. >> >> MFA Receipts >> >> Much of the code used in the existing approach uses a lot of the same patterns from the token provider API within keystone [4]. Since the UUID and SQL parts of the token provider API have been removed, we're also in the middle of cleaning up a ton of technical debt in that area [5]. Adrian seems OK giving us the opportunity to finish cleaning things up before reworking his proposal for authentication receipts. IMO, this seems totally reasonable since it will help us ensure the new code for authentication receipts doesn't have the bad patterns that have plagued us with the token provider API. >> >> >> Does anyone have objections to any of these proposals? If not, I can start bumping various specs to reflect the status described here. >> >> >> [0] http://lists.openstack.org/pipermail/openstack-dev/2018-July/132202.html >> [1] https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:master+topic:bp/strict-two-level-model >> [2] https://review.openstack.org/#/q/(status:open+OR+status:merged)+project:openstack/keystone+branch:master+topic:bug/1776504 >> [3] https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:master+topic:bp/whitelist-extension-for-app-creds >> [4] https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:master+topic:bp/mfa-auth-receipt >> [5] https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:master+topic:bug/1778945 >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From jgrassler at suse.de Fri Jul 13 21:16:22 2018 From: jgrassler at suse.de (Johannes Grassler) Date: Fri, 13 Jul 2018 23:16:22 +0200 Subject: [openstack-dev] [keystone] Feature Status and Exceptions In-Reply-To: References: <20180713203700.qdifdkm7f3n47ezx@btw23.de> Message-ID: <20180713211622.pndphc6qunnyhsw4@btw23.de> Hello, On Fri, Jul 13, 2018 at 03:50:33PM -0500, Lance Bragstad wrote: > On 07/13/2018 03:37 PM, Johannes Grassler wrote: > > On Fri, Jul 13, 2018 at 02:19:35PM -0500, Lance Bragstad wrote: > >> *Capability Lists** > >> * > >> The capability lists involves a lot of work, not just within keystone, > >> but also keystonemiddleware, which will freeze next week. I think it's > >> reasonable to say that this will be something that has to be pushed to > >> Stein [3]. > > I was was planning to email you about that, too...I didn't have much > > time for it lately (rushing to get a few changes in Monasca in plus a > > whole bunch of packaging stuff) and with the deadline this close I > > didn't see much of a chance to get anything meaningful in. > > > > So +1 for Stein from my side. This time I can plan for and accomodate it > > by having less Monasca stuff on my plate... > > +1 > > Thanks for confirming. There still seems to be quite a bit of discussion > around the data model and layout. We can use the PTG to focus on that as > a group if needed (and if you'll be there). For now I'll try to remain cautiously optimistic that this discussion can mostly be resolved by starting from the controller end and making my way to the data model from that side, as people suggested :-) As for the PTG: until now I was planning on skipping it. Lots of travel already this year and I need some quiet time without jetlag to work on the code, too... Cheers, Johannes -- Johannes Grassler, Cloud Developer SUSE Linux GmbH, HRB 21284 (AG Nürnberg) GF: Felix Imendörffer, Jane Smithard, Graham Norton Maxfeldstr. 5, 90409 Nürnberg, Germany From hongbin034 at gmail.com Sat Jul 14 03:34:48 2018 From: hongbin034 at gmail.com (Hongbin Lu) Date: Fri, 13 Jul 2018 23:34:48 -0400 Subject: [openstack-dev] [Zun] Zun UI questions In-Reply-To: References: Message-ID: Hi Amy, First, I want to confirm which version of devstack you were using? (go to the devstack folder and type "git log -1"). If possible, I would suggest to do the following steps: * Run ./unstack * Run ./clean * Pull down the latest version of devstack (if it is too old) * Pull down the latest version of all the projects under /opt/stack/ * Run ./stack If above steps couldn't resolve the problem, please let me know. Best regards, Hongbin On Fri, Jul 13, 2018 at 10:33 AM Amy Marrich wrote: > Hongbin, > > Let me know if you still want me to mail the dev list, but here are the > gists for the installations and the broken CLI I mentioned > > local.conf - which is basically the developer quickstart instructions for > Zun > > https://gist.github.com/spotz/69c5cfa958b233b4c3d232bbfcc451ea > > > This is the failure with a fresh devstack installation > > https://gist.github.com/spotz/14e19b8a3e0b68b7db2f96bff7fdf4a8 > > > Requirements repo change a few weeks ago > > > http://git.openstack.org/cgit/openstack/requirements/commit/?id=cb6c00c01f82537a38bd0c5a560183735cefe2f9 > > > Changed local Flask version for curry-libnetwork and set local.conf to > reclone=no and then installed and tried to use the CLI. 
> > https://gist.github.com/spotz/b53d729fc72d24b4454ee55519e72c07 > > > It makes sense that Flask would cause an issue on the UI installation even > though it's enabled even for a non-enabled build according to the > quickstart doc. I don't mind doing a patch to fix kuryr-libnetwork to bring > it up to the current requirements. I don't however know where to start > troubleshooting the 401 issue. On a different machine I have decstack with > Zun but no zun-ui and the CLI responds correctly. > > > Thanks, > > Amy (spotz) > > > On Thu, Jul 12, 2018 at 11:21 PM, Hongbin Lu wrote: > >> Hi Amy, >> >> I am also in doubts about the Flask version issue. Perhaps you can >> provide more details about this issue? Do you see any error message? >> >> Best regards, >> Hongbin >> >> On Thu, Jul 12, 2018 at 10:49 PM Shu M. wrote: >> >>> >>> Hi Amy, >>> >>> Thank you for sharing the issues. Zun UI does not require >>> kuryr-libnetwork directly, and keystone seems to have same requirements for >>> Flask. So I wonder why install failure occurred by Zun UI. >>> >>> Could you share your correction for requrements. >>> >>> Unfortunately, I'm in trouble on my development environment since >>> yesterday. So I can not investigate the issues quickly. >>> I added Hongbin to this topic, he would help us. >>> >>> Best regards, >>> Shu Muto >>> >>> 2018年7月13日(金) 9:29 Amy Marrich : >>> >>>> Hi, >>>> >>>> I was given your email on the #openstack-zun channel as a source for >>>> questions for the UI. I've found a few issues installing the Master branch >>>> on devstack and not sure if they should be bugged. >>>> >>>> kuryr-libnetwork has incorrect versions for Flask in both >>>> lower-constraints.txt and requirements.txt, this only affects installation >>>> when enabling zun-ui, I'll be more then happy to bug and patch it, if >>>> confirmed as an issue. >>>> >>>> Once correcting the requirements locally to complete the devstack >>>> installation, I'm receiving 401s when using both the OpenStack CLI and Zun >>>> client. I'm also unable to create a container within Horizon. The same >>>> credentials work fine for other OpenStack commands. >>>> >>>> On another server without the ui enabled I can use both the CLI and >>>> client no issues. I'm not sure if there's something missing on >>>> https://docs.openstack.org/zun/latest/contributor/quickstart.html or >>>> some other underlying issue. >>>> >>>> Any help or thoughts appreciated! >>>> >>>> Thanks, >>>> >>>> Amy (spotz) >>>> >>>> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sergey.glazyrin.dev at gmail.com Sat Jul 14 13:29:38 2018 From: sergey.glazyrin.dev at gmail.com (Sergey Glazyrin) Date: Sat, 14 Jul 2018 15:29:38 +0200 Subject: [openstack-dev] [kolla-ansible] how do I unify log data format Message-ID: Hello guys! We are migrating our product to kolla-ansible and as far as probably you know, it uses fluentd to control logs, etc. In non containerized openstack we use rsyslog to send data to logstash. We get data from syslog events. It looks like it's impossible to use syslog in kolla-ansible. Unfortunately external_syslog_server option doesn't work. Is there anyone who was able to use it ? But, nevermind, we may use fluentd BUT.. we have one problem - different data format for each service/container. So, probably the most optimal solution is to use default logging idea in kolla-ansible. (to be honest, I am not sure... but I've no found better option). 
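For instance, what I have in mind is to keep the built-in fluentd and just
add a custom output that forwards the already-parsed records to our own
collector. This is only a sketch: the path assumes the standard
node_custom_config layout, 192.0.2.10 is a made-up endpoint, and the
forward syntax would need checking against the fluentd version in the
kolla images.

mkdir -p /etc/kolla/config/fluentd/output
cat > /etc/kolla/config/fluentd/output/01-external.conf <<'EOF'
<match **>
  @type forward
  <server>
    name external-collector
    host 192.0.2.10
    port 24224
  </server>
</match>
EOF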
But even with the default logging in kolla-ansible we have one serious
problem: fluentd uses a different data format for each service. For
instance, you can see how it is designed in kolla-ansible in this change:
https://github.com/openstack/kolla-ansible/commit/3026cef7cfd1828a27e565d4211692f0ab0ce22e
where there are grok patterns which parse the log messages, etc.

So, we managed to put data into elasticsearch, but we need to solve two
problems:
1. unify the data format for log events. We may solve it using logstash
to normalize the records before putting them into elasticsearch (or
should we change the fluentd configs in our own version of the
kolla-ansible repository?). For instance, we may do it using this
logstash plugin:
https://www.elastic.co/guide/en/logstash/2.4/plugins-filters-mutate.html#plugins-filters-mutate-rename

What's your suggestion?

--
Best, Sergey

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rmeggins at redhat.com  Sat Jul 14 16:36:27 2018
From: rmeggins at redhat.com (Rich Megginson)
Date: Sat, 14 Jul 2018 10:36:27 -0600
Subject: [openstack-dev] [kolla-ansible] how do I unify log data format
In-Reply-To:
References:
Message-ID:

On 07/14/2018 07:29 AM, Sergey Glazyrin wrote:
> Hello guys!
> We are migrating our product to kolla-ansible and as far as probably
> you know, it uses fluentd to control logs, etc. In non containerized
> openstack we use rsyslog to send data to logstash.

Why not use rsyslog in containerized openstack too? Why not use rsyslog
to mutate/unify the records? Why use logstash?

Note that rsyslog can send records to elasticsearch, and the latest
rsyslog 8.36 has enhanced the elasticsearch plugin to do client cert auth
as well as handle bulk index retries more efficiently.

> We get data from syslog events. It looks like it's impossible to use
> syslog in kolla-ansible. Unfortunately external_syslog_server option
> doesn't work. Is there anyone who was able to use it ? But, nevermind,
> we may use fluentd BUT.. we have one problem - different data format
> for each service/container.
>
> So, probably the most optimal solution is to use default logging idea
> in kolla-ansible. (to be honest, I am not sure... but I've no found
> better option). But even with default logging idea in kolla - ansible
> we have one serious problem. Fluentd has different data format for
> each service, for instance, you may see this link with explanation how
> its designed in kolla-ansible
> https://github.com/openstack/kolla-ansible/commit/3026cef7cfd1828a27e565d4211692f0ab0ce22e
> there are grok patterns which parses log messages, etc
>
> so, we managed to put data to elasticsearch but we need to solve two
> problems:
> 1. unify data format for log events. We may solve it using logstash to
> unify it before putting it to elasticsearch (or should we change
> fluentd configs in our own version of kolla-ansible repository ? )
> For instance, we may do it using this logstash plugin
> https://www.elastic.co/guide/en/logstash/2.4/plugins-filters-mutate.html#plugins-filters-mutate-rename
>
> What's your suggestion ?
> > > -- > Best, Sergey > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From amy at demarco.com Sat Jul 14 21:15:47 2018 From: amy at demarco.com (Amy Marrich) Date: Sat, 14 Jul 2018 16:15:47 -0500 Subject: [openstack-dev] [Zun] Zun UI questions In-Reply-To: References: Message-ID: Hongbin, This was a fresh install from master this week commit 6312db47e9141acd33142ae857bdeeb92c59994e Merge: ef35713 2742875 Author: Zuul Date: Wed Jul 11 20:36:12 2018 +0000 Merge "Cleanup keystone's removed config options" Except for builds with my patching kuryr-libnetwork locally builds have been done with reclone and fresh /opt/stack directories. Patch has been submitted for the Flask issue https://review.openstack.org/582634 but hasn't passed the gates yet. Following the instructions above on a new pull of devstack: commit 3b5477d6356a62d7d64a519a4b1ac99309d251c0 Author: OpenStack Proposal Bot Date: Thu Jul 12 06:17:32 2018 +0000 Updated from generate-devstack-plugins-list Change-Id: I8f702373c76953a0a29285f410d368c975ba4024 I'm still able to use the openstack CLI for non-Zun commands but 401 on Zun root at zunui:~# openstack service list +----------------------------------+------------------+------------------+ | ID | Name | Type | +----------------------------------+------------------+------------------+ | 06be414af2fd4d59af8de0ccff78149e | placement | placement | | 0df1832d6f8c4a5aa7b5e8bacf7339f8 | nova | compute | | 3f1b2692a184443c85b631fa7acf714d | heat-cfn | cloudformation | | 3f6bcbb75f684041bf6eeaaf5ab4c14b | cinder | block-storage | | 6e06ac1394ee4872aa134081d190f18e | neutron | network | | 76afda8ecd18474ba382dbb4dc22b4bb | kuryr-libnetwork | kuryr-libnetwork | | 7b336b8b9b9c4f6bbcc5fa6b9400ccaf | cinderv3 | volumev3 | | a0f83f30276d45e2bd5fd14ff8410380 | nova_legacy | compute_legacy | | a12600a2467141ff89a406ec3b50bacb | cinderv2 | volumev2 | | d5bfb92a244b4e7888cae28ca6b2bbac | keystone | identity | | d9ea196e9cae4b0691f6c4b619eb47c9 | zun | container | | e528282e291f4ddbaaac6d6c82a0036e | cinder | volume | | e6078b2c01184f88a784b390f0b28263 | glance | image | | e650be6c67ac4e5c812f2a4e4cca2544 | heat | orchestration | +----------------------------------+------------------+------------------+ root at zunui:~# openstack appcontainer list Unauthorized (HTTP 401) (Request-ID: req-e44f5caf-642c-4435-ab1d- 98feae1fada9) root at zunui:~# zun list ERROR: Unauthorized (HTTP 401) (Request-ID: req-587e39d6-463f-4921-b45b- 29576a00c242) Thanks, Amy (spotz) On Fri, Jul 13, 2018 at 10:34 PM, Hongbin Lu wrote: > Hi Amy, > > First, I want to confirm which version of devstack you were using? (go to > the devstack folder and type "git log -1"). > > If possible, I would suggest to do the following steps: > > * Run ./unstack > * Run ./clean > * Pull down the latest version of devstack (if it is too old) > * Pull down the latest version of all the projects under /opt/stack/ > * Run ./stack > > If above steps couldn't resolve the problem, please let me know. 
> > Best regards, > Hongbin > > > On Fri, Jul 13, 2018 at 10:33 AM Amy Marrich wrote: > >> Hongbin, >> >> Let me know if you still want me to mail the dev list, but here are the >> gists for the installations and the broken CLI I mentioned >> >> local.conf - which is basically the developer quickstart instructions for >> Zun >> >> https://gist.github.com/spotz/69c5cfa958b233b4c3d232bbfcc451ea >> >> >> This is the failure with a fresh devstack installation >> >> https://gist.github.com/spotz/14e19b8a3e0b68b7db2f96bff7fdf4a8 >> >> >> Requirements repo change a few weeks ago >> >> http://git.openstack.org/cgit/openstack/requirements/commit/?id= >> cb6c00c01f82537a38bd0c5a560183735cefe2f9 >> >> >> Changed local Flask version for curry-libnetwork and set local.conf to >> reclone=no and then installed and tried to use the CLI. >> >> https://gist.github.com/spotz/b53d729fc72d24b4454ee55519e72c07 >> >> >> It makes sense that Flask would cause an issue on the UI installation >> even though it's enabled even for a non-enabled build according to the >> quickstart doc. I don't mind doing a patch to fix kuryr-libnetwork to bring >> it up to the current requirements. I don't however know where to start >> troubleshooting the 401 issue. On a different machine I have decstack with >> Zun but no zun-ui and the CLI responds correctly. >> >> >> Thanks, >> >> Amy (spotz) >> >> >> On Thu, Jul 12, 2018 at 11:21 PM, Hongbin Lu >> wrote: >> >>> Hi Amy, >>> >>> I am also in doubts about the Flask version issue. Perhaps you can >>> provide more details about this issue? Do you see any error message? >>> >>> Best regards, >>> Hongbin >>> >>> On Thu, Jul 12, 2018 at 10:49 PM Shu M. wrote: >>> >>>> >>>> Hi Amy, >>>> >>>> Thank you for sharing the issues. Zun UI does not require >>>> kuryr-libnetwork directly, and keystone seems to have same requirements for >>>> Flask. So I wonder why install failure occurred by Zun UI. >>>> >>>> Could you share your correction for requrements. >>>> >>>> Unfortunately, I'm in trouble on my development environment since >>>> yesterday. So I can not investigate the issues quickly. >>>> I added Hongbin to this topic, he would help us. >>>> >>>> Best regards, >>>> Shu Muto >>>> >>>> 2018年7月13日(金) 9:29 Amy Marrich : >>>> >>>>> Hi, >>>>> >>>>> I was given your email on the #openstack-zun channel as a source for >>>>> questions for the UI. I've found a few issues installing the Master branch >>>>> on devstack and not sure if they should be bugged. >>>>> >>>>> kuryr-libnetwork has incorrect versions for Flask in both >>>>> lower-constraints.txt and requirements.txt, this only affects installation >>>>> when enabling zun-ui, I'll be more then happy to bug and patch it, if >>>>> confirmed as an issue. >>>>> >>>>> Once correcting the requirements locally to complete the devstack >>>>> installation, I'm receiving 401s when using both the OpenStack CLI and Zun >>>>> client. I'm also unable to create a container within Horizon. The same >>>>> credentials work fine for other OpenStack commands. >>>>> >>>>> On another server without the ui enabled I can use both the CLI and >>>>> client no issues. I'm not sure if there's something missing on >>>>> https://docs.openstack.org/zun/latest/contributor/quickstart.html or >>>>> some other underlying issue. >>>>> >>>>> Any help or thoughts appreciated! >>>>> >>>>> Thanks, >>>>> >>>>> Amy (spotz) >>>>> >>>>> >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From smonderer at vasonanetworks.com Sat Jul 14 21:25:19 2018 From: smonderer at vasonanetworks.com (Samuel Monderer) Date: Sun, 15 Jul 2018 00:25:19 +0300 Subject: [openstack-dev] [tripleo] Mistral workflow cannot establish connection Message-ID: Hi, I'm trying to deploy redhat OSP13 but I get the following error. (undercloud) [root at staging-director stack]# ./templates/deploy.sh Started Mistral Workflow tripleo.validations.v1.check_pre_deployment_validations. Execution ID: 3ba53aa3-56c5-4024-8d62-bafad967f7c2 Waiting for messages on queue 'tripleo' with no timeout. Removing the current plan files Uploading new plan files Started Mistral Workflow tripleo.plan_management.v1.update_deployment_plan. Execution ID: ff359b14-78d7-4b64-8b09-6ec3c4697d71 Plan updated. Processing templates in the directory /tmp/tripleoclient-ae4yIf/tripleo-heat-templates Unable to establish connection to https://192.168.50.30:13989/v2/action_executions: ('Connection aborted.', BadStatusLine("''",)) (undercloud) [root at staging-director stack]# Couldn't find any info in the logs of what causes the error. Samuel -------------- next part -------------- An HTML attachment was scrubbed... URL: From remo at rm.ht Sat Jul 14 21:45:39 2018 From: remo at rm.ht (Remo Mattei) Date: Sat, 14 Jul 2018 14:45:39 -0700 Subject: [openstack-dev] [tripleo] Mistral workflow cannot establish connection In-Reply-To: References: Message-ID: It is a bad line in one of your yaml file. I would check them. Sent from my iPad > On Jul 14, 2018, at 2:25 PM, Samuel Monderer wrote: > > Hi, > > I'm trying to deploy redhat OSP13 but I get the following error. > (undercloud) [root at staging-director stack]# ./templates/deploy.sh > Started Mistral Workflow tripleo.validations.v1.check_pre_deployment_validations. Execution ID: 3ba53aa3-56c5-4024-8d62-bafad967f7c2 > Waiting for messages on queue 'tripleo' with no timeout. > Removing the current plan files > Uploading new plan files > Started Mistral Workflow tripleo.plan_management.v1.update_deployment_plan. Execution ID: ff359b14-78d7-4b64-8b09-6ec3c4697d71 > Plan updated. > Processing templates in the directory /tmp/tripleoclient-ae4yIf/tripleo-heat-templates > Unable to establish connection to https://192.168.50.30:13989/v2/action_executions: ('Connection aborted.', BadStatusLine("''",)) > (undercloud) [root at staging-director stack]# > > Couldn't find any info in the logs of what causes the error. > > Samuel > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From hongbin034 at gmail.com Sun Jul 15 02:42:38 2018 From: hongbin034 at gmail.com (Hongbin Lu) Date: Sat, 14 Jul 2018 22:42:38 -0400 Subject: [openstack-dev] [Zun] Zun UI questions In-Reply-To: References: Message-ID: Hi Amy, Today, I created a fresh VM with Ubuntu16.04 and run ./stack.sh with your local.conf, but I couldn't reproduce the two issues you mentioned (the Flask version conflict issue and 401 issue). By analyzing the logs you provided, it seems some python packages in your machine are pretty old. First, could you paste me the output of "pip freeze". 
Second, if possible, I would suggest to remove all the python packages and re-stack again as following: * Run ./unstack * Run ./clean.sh * Run pip freeze | grep -v '^\-e' | xargs sudo pip uninstall -y * Run ./stack Please let us know if above steps still don't work. Best regards, Hongbin On Sat, Jul 14, 2018 at 5:15 PM Amy Marrich wrote: > Hongbin, > > This was a fresh install from master this week > > commit 6312db47e9141acd33142ae857bdeeb92c59994e > > Merge: ef35713 2742875 > > Author: Zuul > > Date: Wed Jul 11 20:36:12 2018 +0000 > > > Merge "Cleanup keystone's removed config options" > > Except for builds with my patching kuryr-libnetwork locally builds have > been done with reclone and fresh /opt/stack directories. Patch has been > submitted for the Flask issue > > https://review.openstack.org/582634 but hasn't passed the gates yet. > > > Following the instructions above on a new pull of devstack: > > commit 3b5477d6356a62d7d64a519a4b1ac99309d251c0 > > Author: OpenStack Proposal Bot > > Date: Thu Jul 12 06:17:32 2018 +0000 > > Updated from generate-devstack-plugins-list > > Change-Id: I8f702373c76953a0a29285f410d368c975ba4024 > > > I'm still able to use the openstack CLI for non-Zun commands but 401 on Zun > > root at zunui:~# openstack service list > > +----------------------------------+------------------+------------------+ > > | ID | Name | Type | > > +----------------------------------+------------------+------------------+ > > | 06be414af2fd4d59af8de0ccff78149e | placement | placement | > > | 0df1832d6f8c4a5aa7b5e8bacf7339f8 | nova | compute | > > | 3f1b2692a184443c85b631fa7acf714d | heat-cfn | cloudformation | > > | 3f6bcbb75f684041bf6eeaaf5ab4c14b | cinder | block-storage | > > | 6e06ac1394ee4872aa134081d190f18e | neutron | network | > > | 76afda8ecd18474ba382dbb4dc22b4bb | kuryr-libnetwork | kuryr-libnetwork | > > | 7b336b8b9b9c4f6bbcc5fa6b9400ccaf | cinderv3 | volumev3 | > > | a0f83f30276d45e2bd5fd14ff8410380 | nova_legacy | compute_legacy | > > | a12600a2467141ff89a406ec3b50bacb | cinderv2 | volumev2 | > > | d5bfb92a244b4e7888cae28ca6b2bbac | keystone | identity | > > | d9ea196e9cae4b0691f6c4b619eb47c9 | zun | container | > > | e528282e291f4ddbaaac6d6c82a0036e | cinder | volume | > > | e6078b2c01184f88a784b390f0b28263 | glance | image | > > | e650be6c67ac4e5c812f2a4e4cca2544 | heat | orchestration | > > +----------------------------------+------------------+------------------+ > > root at zunui:~# openstack appcontainer list > > Unauthorized (HTTP 401) (Request-ID: > req-e44f5caf-642c-4435-ab1d-98feae1fada9) > > root at zunui:~# zun list > > ERROR: Unauthorized (HTTP 401) (Request-ID: > req-587e39d6-463f-4921-b45b-29576a00c242) > > > Thanks, > > > Amy (spotz) > > > > On Fri, Jul 13, 2018 at 10:34 PM, Hongbin Lu wrote: > >> Hi Amy, >> >> First, I want to confirm which version of devstack you were using? (go to >> the devstack folder and type "git log -1"). >> >> If possible, I would suggest to do the following steps: >> >> * Run ./unstack >> * Run ./clean >> * Pull down the latest version of devstack (if it is too old) >> * Pull down the latest version of all the projects under /opt/stack/ >> * Run ./stack >> >> If above steps couldn't resolve the problem, please let me know. 
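If it is unclear whether a re-stack really replaced the old packages, a few lines of Python will show what actually ended up installed (the package names below are just the ones relevant to this thread):

    import pkg_resources

    for name in ('Flask', 'kuryr-libnetwork', 'python-zunclient', 'zun'):
        try:
            print(name, pkg_resources.get_distribution(name).version)
        except pkg_resources.DistributionNotFound:
            print(name, 'not installed')

Stale versions surviving a reclone could explain both the Flask conflict and the odd client behaviour.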
>> >> Best regards, >> Hongbin >> >> >> On Fri, Jul 13, 2018 at 10:33 AM Amy Marrich wrote: >> >>> Hongbin, >>> >>> Let me know if you still want me to mail the dev list, but here are the >>> gists for the installations and the broken CLI I mentioned >>> >>> local.conf - which is basically the developer quickstart instructions >>> for Zun >>> >>> https://gist.github.com/spotz/69c5cfa958b233b4c3d232bbfcc451ea >>> >>> >>> This is the failure with a fresh devstack installation >>> >>> https://gist.github.com/spotz/14e19b8a3e0b68b7db2f96bff7fdf4a8 >>> >>> >>> Requirements repo change a few weeks ago >>> >>> >>> http://git.openstack.org/cgit/openstack/requirements/commit/?id=cb6c00c01f82537a38bd0c5a560183735cefe2f9 >>> >>> >>> Changed local Flask version for curry-libnetwork and set local.conf to >>> reclone=no and then installed and tried to use the CLI. >>> >>> https://gist.github.com/spotz/b53d729fc72d24b4454ee55519e72c07 >>> >>> >>> It makes sense that Flask would cause an issue on the UI installation >>> even though it's enabled even for a non-enabled build according to the >>> quickstart doc. I don't mind doing a patch to fix kuryr-libnetwork to bring >>> it up to the current requirements. I don't however know where to start >>> troubleshooting the 401 issue. On a different machine I have decstack with >>> Zun but no zun-ui and the CLI responds correctly. >>> >>> >>> Thanks, >>> >>> Amy (spotz) >>> >>> >>> On Thu, Jul 12, 2018 at 11:21 PM, Hongbin Lu >>> wrote: >>> >>>> Hi Amy, >>>> >>>> I am also in doubts about the Flask version issue. Perhaps you can >>>> provide more details about this issue? Do you see any error message? >>>> >>>> Best regards, >>>> Hongbin >>>> >>>> On Thu, Jul 12, 2018 at 10:49 PM Shu M. wrote: >>>> >>>>> >>>>> Hi Amy, >>>>> >>>>> Thank you for sharing the issues. Zun UI does not require >>>>> kuryr-libnetwork directly, and keystone seems to have same requirements for >>>>> Flask. So I wonder why install failure occurred by Zun UI. >>>>> >>>>> Could you share your correction for requrements. >>>>> >>>>> Unfortunately, I'm in trouble on my development environment since >>>>> yesterday. So I can not investigate the issues quickly. >>>>> I added Hongbin to this topic, he would help us. >>>>> >>>>> Best regards, >>>>> Shu Muto >>>>> >>>>> 2018年7月13日(金) 9:29 Amy Marrich : >>>>> >>>>>> Hi, >>>>>> >>>>>> I was given your email on the #openstack-zun channel as a source for >>>>>> questions for the UI. I've found a few issues installing the Master branch >>>>>> on devstack and not sure if they should be bugged. >>>>>> >>>>>> kuryr-libnetwork has incorrect versions for Flask in both >>>>>> lower-constraints.txt and requirements.txt, this only affects installation >>>>>> when enabling zun-ui, I'll be more then happy to bug and patch it, if >>>>>> confirmed as an issue. >>>>>> >>>>>> Once correcting the requirements locally to complete the devstack >>>>>> installation, I'm receiving 401s when using both the OpenStack CLI and Zun >>>>>> client. I'm also unable to create a container within Horizon. The same >>>>>> credentials work fine for other OpenStack commands. >>>>>> >>>>>> On another server without the ui enabled I can use both the CLI and >>>>>> client no issues. I'm not sure if there's something missing on >>>>>> https://docs.openstack.org/zun/latest/contributor/quickstart.html or >>>>>> some other underlying issue. >>>>>> >>>>>> Any help or thoughts appreciated! 
>>>>>> >>>>>> Thanks, >>>>>> >>>>>> Amy (spotz) >>>>>> >>>>>> >>> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amy at demarco.com Sun Jul 15 14:16:45 2018 From: amy at demarco.com (Amy Marrich) Date: Sun, 15 Jul 2018 09:16:45 -0500 Subject: [openstack-dev] [Zun] Zun UI questions In-Reply-To: References: Message-ID: Hongbin, Doing the pip uninstall did the trick with the Flask version, when running another debug I did notice an incorrect IP for the Keystone URI and have restarted the machines networking and cleaned up the /etc/hosts. When doing a second stack, I did need to uninstall the pip packages again for the second stack.sh to complete, might be worth adding this to the docs as a note if people have issues. Second install still had the wrong IP showing as the Keystone URI, I'll try s fresh machine install next. Thanks for all your help! Amy (spotz) On Sat, Jul 14, 2018 at 9:42 PM, Hongbin Lu wrote: > Hi Amy, > > Today, I created a fresh VM with Ubuntu16.04 and run ./stack.sh with your > local.conf, but I couldn't reproduce the two issues you mentioned (the > Flask version conflict issue and 401 issue). By analyzing the logs you > provided, it seems some python packages in your machine are pretty old. > First, could you paste me the output of "pip freeze". Second, if possible, > I would suggest to remove all the python packages and re-stack again as > following: > > * Run ./unstack > * Run ./clean.sh > * Run pip freeze | grep -v '^\-e' | xargs sudo pip uninstall -y > * Run ./stack > > Please let us know if above steps still don't work. > > Best regards, > Hongbin > > On Sat, Jul 14, 2018 at 5:15 PM Amy Marrich wrote: > >> Hongbin, >> >> This was a fresh install from master this week >> >> commit 6312db47e9141acd33142ae857bdeeb92c59994e >> >> Merge: ef35713 2742875 >> >> Author: Zuul >> >> Date: Wed Jul 11 20:36:12 2018 +0000 >> >> >> Merge "Cleanup keystone's removed config options" >> >> Except for builds with my patching kuryr-libnetwork locally builds have >> been done with reclone and fresh /opt/stack directories. Patch has been >> submitted for the Flask issue >> >> https://review.openstack.org/582634 but hasn't passed the gates yet. 
>> >> >> Following the instructions above on a new pull of devstack: >> >> commit 3b5477d6356a62d7d64a519a4b1ac99309d251c0 >> >> Author: OpenStack Proposal Bot >> >> Date: Thu Jul 12 06:17:32 2018 +0000 >> >> Updated from generate-devstack-plugins-list >> >> Change-Id: I8f702373c76953a0a29285f410d368c975ba4024 >> >> >> I'm still able to use the openstack CLI for non-Zun commands but 401 on >> Zun >> >> root at zunui:~# openstack service list >> >> +----------------------------------+------------------+----- >> -------------+ >> >> | ID | Name | Type >> | >> >> +----------------------------------+------------------+----- >> -------------+ >> >> | 06be414af2fd4d59af8de0ccff78149e | placement | placement >> | >> >> | 0df1832d6f8c4a5aa7b5e8bacf7339f8 | nova | compute >> | >> >> | 3f1b2692a184443c85b631fa7acf714d | heat-cfn | cloudformation >> | >> >> | 3f6bcbb75f684041bf6eeaaf5ab4c14b | cinder | block-storage >> | >> >> | 6e06ac1394ee4872aa134081d190f18e | neutron | network >> | >> >> | 76afda8ecd18474ba382dbb4dc22b4bb | kuryr-libnetwork | kuryr-libnetwork >> | >> >> | 7b336b8b9b9c4f6bbcc5fa6b9400ccaf | cinderv3 | volumev3 >> | >> >> | a0f83f30276d45e2bd5fd14ff8410380 | nova_legacy | compute_legacy >> | >> >> | a12600a2467141ff89a406ec3b50bacb | cinderv2 | volumev2 >> | >> >> | d5bfb92a244b4e7888cae28ca6b2bbac | keystone | identity >> | >> >> | d9ea196e9cae4b0691f6c4b619eb47c9 | zun | container >> | >> >> | e528282e291f4ddbaaac6d6c82a0036e | cinder | volume >> | >> >> | e6078b2c01184f88a784b390f0b28263 | glance | image >> | >> >> | e650be6c67ac4e5c812f2a4e4cca2544 | heat | orchestration >> | >> >> +----------------------------------+------------------+----- >> -------------+ >> >> root at zunui:~# openstack appcontainer list >> >> Unauthorized (HTTP 401) (Request-ID: req-e44f5caf-642c-4435-ab1d- >> 98feae1fada9) >> >> root at zunui:~# zun list >> >> ERROR: Unauthorized (HTTP 401) (Request-ID: req-587e39d6-463f-4921-b45b- >> 29576a00c242) >> >> >> Thanks, >> >> >> Amy (spotz) >> >> >> >> On Fri, Jul 13, 2018 at 10:34 PM, Hongbin Lu >> wrote: >> >>> Hi Amy, >>> >>> First, I want to confirm which version of devstack you were using? (go >>> to the devstack folder and type "git log -1"). >>> >>> If possible, I would suggest to do the following steps: >>> >>> * Run ./unstack >>> * Run ./clean >>> * Pull down the latest version of devstack (if it is too old) >>> * Pull down the latest version of all the projects under /opt/stack/ >>> * Run ./stack >>> >>> If above steps couldn't resolve the problem, please let me know. >>> >>> Best regards, >>> Hongbin >>> >>> >>> On Fri, Jul 13, 2018 at 10:33 AM Amy Marrich wrote: >>> >>>> Hongbin, >>>> >>>> Let me know if you still want me to mail the dev list, but here are the >>>> gists for the installations and the broken CLI I mentioned >>>> >>>> local.conf - which is basically the developer quickstart instructions >>>> for Zun >>>> >>>> https://gist.github.com/spotz/69c5cfa958b233b4c3d232bbfcc451ea >>>> >>>> >>>> This is the failure with a fresh devstack installation >>>> >>>> https://gist.github.com/spotz/14e19b8a3e0b68b7db2f96bff7fdf4a8 >>>> >>>> >>>> Requirements repo change a few weeks ago >>>> >>>> http://git.openstack.org/cgit/openstack/requirements/commit/?id= >>>> cb6c00c01f82537a38bd0c5a560183735cefe2f9 >>>> >>>> >>>> Changed local Flask version for curry-libnetwork and set local.conf to >>>> reclone=no and then installed and tried to use the CLI. 
>>>> >>>> https://gist.github.com/spotz/b53d729fc72d24b4454ee55519e72c07 >>>> >>>> >>>> It makes sense that Flask would cause an issue on the UI installation >>>> even though it's enabled even for a non-enabled build according to the >>>> quickstart doc. I don't mind doing a patch to fix kuryr-libnetwork to bring >>>> it up to the current requirements. I don't however know where to start >>>> troubleshooting the 401 issue. On a different machine I have decstack with >>>> Zun but no zun-ui and the CLI responds correctly. >>>> >>>> >>>> Thanks, >>>> >>>> Amy (spotz) >>>> >>>> >>>> On Thu, Jul 12, 2018 at 11:21 PM, Hongbin Lu >>>> wrote: >>>> >>>>> Hi Amy, >>>>> >>>>> I am also in doubts about the Flask version issue. Perhaps you can >>>>> provide more details about this issue? Do you see any error message? >>>>> >>>>> Best regards, >>>>> Hongbin >>>>> >>>>> On Thu, Jul 12, 2018 at 10:49 PM Shu M. wrote: >>>>> >>>>>> >>>>>> Hi Amy, >>>>>> >>>>>> Thank you for sharing the issues. Zun UI does not require >>>>>> kuryr-libnetwork directly, and keystone seems to have same requirements for >>>>>> Flask. So I wonder why install failure occurred by Zun UI. >>>>>> >>>>>> Could you share your correction for requrements. >>>>>> >>>>>> Unfortunately, I'm in trouble on my development environment since >>>>>> yesterday. So I can not investigate the issues quickly. >>>>>> I added Hongbin to this topic, he would help us. >>>>>> >>>>>> Best regards, >>>>>> Shu Muto >>>>>> >>>>>> 2018年7月13日(金) 9:29 Amy Marrich : >>>>>> >>>>>>> Hi, >>>>>>> >>>>>>> I was given your email on the #openstack-zun channel as a source for >>>>>>> questions for the UI. I've found a few issues installing the Master branch >>>>>>> on devstack and not sure if they should be bugged. >>>>>>> >>>>>>> kuryr-libnetwork has incorrect versions for Flask in both >>>>>>> lower-constraints.txt and requirements.txt, this only affects installation >>>>>>> when enabling zun-ui, I'll be more then happy to bug and patch it, if >>>>>>> confirmed as an issue. >>>>>>> >>>>>>> Once correcting the requirements locally to complete the devstack >>>>>>> installation, I'm receiving 401s when using both the OpenStack CLI and Zun >>>>>>> client. I'm also unable to create a container within Horizon. The same >>>>>>> credentials work fine for other OpenStack commands. >>>>>>> >>>>>>> On another server without the ui enabled I can use both the CLI and >>>>>>> client no issues. I'm not sure if there's something missing on >>>>>>> https://docs.openstack.org/zun/latest/contributor/quickstart.html >>>>>>> or some other underlying issue. >>>>>>> >>>>>>> Any help or thoughts appreciated! >>>>>>> >>>>>>> Thanks, >>>>>>> >>>>>>> Amy (spotz) >>>>>>> >>>>>>> >>>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From smonderer at vasonanetworks.com Sun Jul 15 15:02:42 2018 From: smonderer at vasonanetworks.com (Samuel Monderer) Date: Sun, 15 Jul 2018 18:02:42 +0300 Subject: [openstack-dev] [tripleo] Mistral workflow cannot establish connection In-Reply-To: References: Message-ID: It seems that the problem is in my roles_data.yaml file but I don't see what is the problem I've attached the file. On Sun, Jul 15, 2018 at 12:46 AM Remo Mattei wrote: > It is a bad line in one of your yaml file. I would check them. > > Sent from my iPad > > On Jul 14, 2018, at 2:25 PM, Samuel Monderer > wrote: > > Hi, > > I'm trying to deploy redhat OSP13 but I get the following error. 
> (undercloud) [root at staging-director stack]# ./templates/deploy.sh > Started Mistral Workflow > tripleo.validations.v1.check_pre_deployment_validations. Execution ID: > 3ba53aa3-56c5-4024-8d62-bafad967f7c2 > Waiting for messages on queue 'tripleo' with no timeout. > Removing the current plan files > Uploading new plan files > Started Mistral Workflow > tripleo.plan_management.v1.update_deployment_plan. Execution ID: > ff359b14-78d7-4b64-8b09-6ec3c4697d71 > Plan updated. > Processing templates in the directory > /tmp/tripleoclient-ae4yIf/tripleo-heat-templates > Unable to establish connection to > https://192.168.50.30:13989/v2/action_executions: ('Connection aborted.', > BadStatusLine("''",)) > (undercloud) [root at staging-director stack]# > > Couldn't find any info in the logs of what causes the error. > > Samuel > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: roles_data.yaml Type: application/x-yaml Size: 44319 bytes Desc: not available URL: From hongbin034 at gmail.com Sun Jul 15 15:49:53 2018 From: hongbin034 at gmail.com (Hongbin Lu) Date: Sun, 15 Jul 2018 11:49:53 -0400 Subject: [openstack-dev] [Zun] Zun UI questions In-Reply-To: References: Message-ID: Hi Amy, The wrong Keystone URI might be due to the an issue of the devstack plugins. I have proposed fixes [1] [2] for that. Thanks for the suggestion about adding a note for uninstalling pip packages. I have created a ticket [3] for that. [1] https://review.openstack.org/#/c/582799/ [2] https://review.openstack.org/#/c/582800/ [3] https://bugs.launchpad.net/zun/+bug/1781807 Best regards, Hongbin On Sun, Jul 15, 2018 at 10:16 AM Amy Marrich wrote: > Hongbin, > > Doing the pip uninstall did the trick with the Flask version, when running > another debug I did notice an incorrect IP for the Keystone URI and have > restarted the machines networking and cleaned up the /etc/hosts. > > When doing a second stack, I did need to uninstall the pip packages again > for the second stack.sh to complete, might be worth adding this to the docs > as a note if people have issues. Second install still had the wrong IP > showing as the Keystone URI, I'll try s fresh machine install next. > > Thanks for all your help! > > Amy (spotz) > > On Sat, Jul 14, 2018 at 9:42 PM, Hongbin Lu wrote: > >> Hi Amy, >> >> Today, I created a fresh VM with Ubuntu16.04 and run ./stack.sh with your >> local.conf, but I couldn't reproduce the two issues you mentioned (the >> Flask version conflict issue and 401 issue). By analyzing the logs you >> provided, it seems some python packages in your machine are pretty old. >> First, could you paste me the output of "pip freeze". 
Second, if possible, >> I would suggest to remove all the python packages and re-stack again as >> following: >> >> * Run ./unstack >> * Run ./clean.sh >> * Run pip freeze | grep -v '^\-e' | xargs sudo pip uninstall -y >> * Run ./stack >> >> Please let us know if above steps still don't work. >> >> Best regards, >> Hongbin >> >> On Sat, Jul 14, 2018 at 5:15 PM Amy Marrich wrote: >> >>> Hongbin, >>> >>> This was a fresh install from master this week >>> >>> commit 6312db47e9141acd33142ae857bdeeb92c59994e >>> >>> Merge: ef35713 2742875 >>> >>> Author: Zuul >>> >>> Date: Wed Jul 11 20:36:12 2018 +0000 >>> >>> >>> Merge "Cleanup keystone's removed config options" >>> >>> Except for builds with my patching kuryr-libnetwork locally builds have >>> been done with reclone and fresh /opt/stack directories. Patch has been >>> submitted for the Flask issue >>> >>> https://review.openstack.org/582634 but hasn't passed the gates yet. >>> >>> >>> Following the instructions above on a new pull of devstack: >>> >>> commit 3b5477d6356a62d7d64a519a4b1ac99309d251c0 >>> >>> Author: OpenStack Proposal Bot >>> >>> Date: Thu Jul 12 06:17:32 2018 +0000 >>> >>> Updated from generate-devstack-plugins-list >>> >>> Change-Id: I8f702373c76953a0a29285f410d368c975ba4024 >>> >>> >>> I'm still able to use the openstack CLI for non-Zun commands but 401 on >>> Zun >>> >>> root at zunui:~# openstack service list >>> >>> >>> +----------------------------------+------------------+------------------+ >>> >>> | ID | Name | Type >>> | >>> >>> >>> +----------------------------------+------------------+------------------+ >>> >>> | 06be414af2fd4d59af8de0ccff78149e | placement | placement >>> | >>> >>> | 0df1832d6f8c4a5aa7b5e8bacf7339f8 | nova | compute >>> | >>> >>> | 3f1b2692a184443c85b631fa7acf714d | heat-cfn | cloudformation >>> | >>> >>> | 3f6bcbb75f684041bf6eeaaf5ab4c14b | cinder | block-storage >>> | >>> >>> | 6e06ac1394ee4872aa134081d190f18e | neutron | network >>> | >>> >>> | 76afda8ecd18474ba382dbb4dc22b4bb | kuryr-libnetwork | kuryr-libnetwork >>> | >>> >>> | 7b336b8b9b9c4f6bbcc5fa6b9400ccaf | cinderv3 | volumev3 >>> | >>> >>> | a0f83f30276d45e2bd5fd14ff8410380 | nova_legacy | compute_legacy >>> | >>> >>> | a12600a2467141ff89a406ec3b50bacb | cinderv2 | volumev2 >>> | >>> >>> | d5bfb92a244b4e7888cae28ca6b2bbac | keystone | identity >>> | >>> >>> | d9ea196e9cae4b0691f6c4b619eb47c9 | zun | container >>> | >>> >>> | e528282e291f4ddbaaac6d6c82a0036e | cinder | volume >>> | >>> >>> | e6078b2c01184f88a784b390f0b28263 | glance | image >>> | >>> >>> | e650be6c67ac4e5c812f2a4e4cca2544 | heat | orchestration >>> | >>> >>> >>> +----------------------------------+------------------+------------------+ >>> >>> root at zunui:~# openstack appcontainer list >>> >>> Unauthorized (HTTP 401) (Request-ID: >>> req-e44f5caf-642c-4435-ab1d-98feae1fada9) >>> >>> root at zunui:~# zun list >>> >>> ERROR: Unauthorized (HTTP 401) (Request-ID: >>> req-587e39d6-463f-4921-b45b-29576a00c242) >>> >>> >>> Thanks, >>> >>> >>> Amy (spotz) >>> >>> >>> >>> On Fri, Jul 13, 2018 at 10:34 PM, Hongbin Lu >>> wrote: >>> >>>> Hi Amy, >>>> >>>> First, I want to confirm which version of devstack you were using? (go >>>> to the devstack folder and type "git log -1"). 
>>>> >>>> If possible, I would suggest to do the following steps: >>>> >>>> * Run ./unstack >>>> * Run ./clean >>>> * Pull down the latest version of devstack (if it is too old) >>>> * Pull down the latest version of all the projects under /opt/stack/ >>>> * Run ./stack >>>> >>>> If above steps couldn't resolve the problem, please let me know. >>>> >>>> Best regards, >>>> Hongbin >>>> >>>> >>>> On Fri, Jul 13, 2018 at 10:33 AM Amy Marrich wrote: >>>> >>>>> Hongbin, >>>>> >>>>> Let me know if you still want me to mail the dev list, but here are >>>>> the gists for the installations and the broken CLI I mentioned >>>>> >>>>> local.conf - which is basically the developer quickstart instructions >>>>> for Zun >>>>> >>>>> https://gist.github.com/spotz/69c5cfa958b233b4c3d232bbfcc451ea >>>>> >>>>> >>>>> This is the failure with a fresh devstack installation >>>>> >>>>> https://gist.github.com/spotz/14e19b8a3e0b68b7db2f96bff7fdf4a8 >>>>> >>>>> >>>>> Requirements repo change a few weeks ago >>>>> >>>>> >>>>> http://git.openstack.org/cgit/openstack/requirements/commit/?id=cb6c00c01f82537a38bd0c5a560183735cefe2f9 >>>>> >>>>> >>>>> Changed local Flask version for curry-libnetwork and set local.conf to >>>>> reclone=no and then installed and tried to use the CLI. >>>>> >>>>> https://gist.github.com/spotz/b53d729fc72d24b4454ee55519e72c07 >>>>> >>>>> >>>>> It makes sense that Flask would cause an issue on the UI installation >>>>> even though it's enabled even for a non-enabled build according to the >>>>> quickstart doc. I don't mind doing a patch to fix kuryr-libnetwork to bring >>>>> it up to the current requirements. I don't however know where to start >>>>> troubleshooting the 401 issue. On a different machine I have decstack with >>>>> Zun but no zun-ui and the CLI responds correctly. >>>>> >>>>> >>>>> Thanks, >>>>> >>>>> Amy (spotz) >>>>> >>>>> >>>>> On Thu, Jul 12, 2018 at 11:21 PM, Hongbin Lu >>>>> wrote: >>>>> >>>>>> Hi Amy, >>>>>> >>>>>> I am also in doubts about the Flask version issue. Perhaps you can >>>>>> provide more details about this issue? Do you see any error message? >>>>>> >>>>>> Best regards, >>>>>> Hongbin >>>>>> >>>>>> On Thu, Jul 12, 2018 at 10:49 PM Shu M. wrote: >>>>>> >>>>>>> >>>>>>> Hi Amy, >>>>>>> >>>>>>> Thank you for sharing the issues. Zun UI does not require >>>>>>> kuryr-libnetwork directly, and keystone seems to have same requirements for >>>>>>> Flask. So I wonder why install failure occurred by Zun UI. >>>>>>> >>>>>>> Could you share your correction for requrements. >>>>>>> >>>>>>> Unfortunately, I'm in trouble on my development environment since >>>>>>> yesterday. So I can not investigate the issues quickly. >>>>>>> I added Hongbin to this topic, he would help us. >>>>>>> >>>>>>> Best regards, >>>>>>> Shu Muto >>>>>>> >>>>>>> 2018年7月13日(金) 9:29 Amy Marrich : >>>>>>> >>>>>>>> Hi, >>>>>>>> >>>>>>>> I was given your email on the #openstack-zun channel as a source >>>>>>>> for questions for the UI. I've found a few issues installing the Master >>>>>>>> branch on devstack and not sure if they should be bugged. >>>>>>>> >>>>>>>> kuryr-libnetwork has incorrect versions for Flask in both >>>>>>>> lower-constraints.txt and requirements.txt, this only affects installation >>>>>>>> when enabling zun-ui, I'll be more then happy to bug and patch it, if >>>>>>>> confirmed as an issue. >>>>>>>> >>>>>>>> Once correcting the requirements locally to complete the devstack >>>>>>>> installation, I'm receiving 401s when using both the OpenStack CLI and Zun >>>>>>>> client. 
I'm also unable to create a container within Horizon. The same >>>>>>>> credentials work fine for other OpenStack commands. >>>>>>>> >>>>>>>> On another server without the ui enabled I can use both the CLI and >>>>>>>> client no issues. I'm not sure if there's something missing on >>>>>>>> https://docs.openstack.org/zun/latest/contributor/quickstart.html >>>>>>>> or some other underlying issue. >>>>>>>> >>>>>>>> Any help or thoughts appreciated! >>>>>>>> >>>>>>>> Thanks, >>>>>>>> >>>>>>>> Amy (spotz) >>>>>>>> >>>>>>>> >>>>> >>> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From remo at rm.ht Sun Jul 15 15:57:22 2018 From: remo at rm.ht (Remo Mattei) Date: Sun, 15 Jul 2018 08:57:22 -0700 Subject: [openstack-dev] [tripleo] Mistral workflow cannot establish connection In-Reply-To: References: Message-ID: <1CDF4A32-AFB3-44F2-94C4-339EF36AE4D2@rm.ht> Here is the one I use > On Jul 15, 2018, at 8:02 AM, Samuel Monderer wrote: > > It seems that the problem is in my roles_data.yaml file but I don't see what is the problem > I've attached the file. > > On Sun, Jul 15, 2018 at 12:46 AM Remo Mattei > wrote: > It is a bad line in one of your yaml file. I would check them. > > Sent from my iPad > > On Jul 14, 2018, at 2:25 PM, Samuel Monderer > wrote: > >> Hi, >> >> I'm trying to deploy redhat OSP13 but I get the following error. >> (undercloud) [root at staging-director stack]# ./templates/deploy.sh >> Started Mistral Workflow tripleo.validations.v1.check_pre_deployment_validations. Execution ID: 3ba53aa3-56c5-4024-8d62-bafad967f7c2 >> Waiting for messages on queue 'tripleo' with no timeout. >> Removing the current plan files >> Uploading new plan files >> Started Mistral Workflow tripleo.plan_management.v1.update_deployment_plan. Execution ID: ff359b14-78d7-4b64-8b09-6ec3c4697d71 >> Plan updated. >> Processing templates in the directory /tmp/tripleoclient-ae4yIf/tripleo-heat-templates >> Unable to establish connection to https://192.168.50.30:13989/v2/action_executions : ('Connection aborted.', BadStatusLine("''",)) >> (undercloud) [root at staging-director stack]# >> >> Couldn't find any info in the logs of what causes the error. >> >> Samuel > >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org ?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org ?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: roles_data.yaml Type: application/octet-stream Size: 13046 bytes Desc: not available URL: -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From remo at rm.ht Sun Jul 15 17:07:52 2018 From: remo at rm.ht (Remo Mattei) Date: Sun, 15 Jul 2018 10:07:52 -0700 Subject: [openstack-dev] [tripleo] Mistral workflow cannot establish connection In-Reply-To: <1CDF4A32-AFB3-44F2-94C4-339EF36AE4D2@rm.ht> References: <1CDF4A32-AFB3-44F2-94C4-339EF36AE4D2@rm.ht> Message-ID: <35724F88-1BEB-4587-A6E7-AE07A2C648FC@rm.ht> I still think there is something wrong with some of your yaml, the roles_data is elaborating based on what your yaml files are. Can you share your deployment script did you make any of the yaml files yourself? Remo > On Jul 15, 2018, at 8:57 AM, Remo Mattei wrote: > > Here is the one I use > > > > >> On Jul 15, 2018, at 8:02 AM, Samuel Monderer > wrote: >> >> It seems that the problem is in my roles_data.yaml file but I don't see what is the problem >> I've attached the file. >> >> On Sun, Jul 15, 2018 at 12:46 AM Remo Mattei > wrote: >> It is a bad line in one of your yaml file. I would check them. >> >> Sent from my iPad >> >> On Jul 14, 2018, at 2:25 PM, Samuel Monderer > wrote: >> >>> Hi, >>> >>> I'm trying to deploy redhat OSP13 but I get the following error. >>> (undercloud) [root at staging-director stack]# ./templates/deploy.sh >>> Started Mistral Workflow tripleo.validations.v1.check_pre_deployment_validations. Execution ID: 3ba53aa3-56c5-4024-8d62-bafad967f7c2 >>> Waiting for messages on queue 'tripleo' with no timeout. >>> Removing the current plan files >>> Uploading new plan files >>> Started Mistral Workflow tripleo.plan_management.v1.update_deployment_plan. Execution ID: ff359b14-78d7-4b64-8b09-6ec3c4697d71 >>> Plan updated. >>> Processing templates in the directory /tmp/tripleoclient-ae4yIf/tripleo-heat-templates >>> Unable to establish connection to https://192.168.50.30:13989/v2/action_executions : ('Connection aborted.', BadStatusLine("''",)) >>> (undercloud) [root at staging-director stack]# >>> >>> Couldn't find any info in the logs of what causes the error. >>> >>> Samuel >> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org ?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org ?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org ?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... 
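For what it is worth, a plain syntax problem in roles_data.yaml can be ruled out locally before the next deploy attempt by loading the file with PyYAML (normally already present on the undercloud). A minimal sketch, saved for example as check_roles.py:

    import sys
    import yaml  # PyYAML

    with open(sys.argv[1]) as f:
        roles = yaml.safe_load(f)

    # roles_data.yaml is expected to parse into a list of role definitions,
    # each of which has at least a name.
    assert isinstance(roles, list), 'top level of roles_data.yaml must be a list'
    for role in roles:
        print(role.get('name'), sorted(role.keys()))

Running it as "python check_roles.py roles_data.yaml" either throws a yaml error pointing at the offending line or prints each role and its keys; a role without a name is the kind of bad entry worth fixing before re-running the deploy.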
URL: From smonderer at vasonanetworks.com Sun Jul 15 18:50:24 2018 From: smonderer at vasonanetworks.com (Samuel Monderer) Date: Sun, 15 Jul 2018 21:50:24 +0300 Subject: [openstack-dev] [tripleo] Mistral workflow cannot establish connection In-Reply-To: <35724F88-1BEB-4587-A6E7-AE07A2C648FC@rm.ht> References: <1CDF4A32-AFB3-44F2-94C4-339EF36AE4D2@rm.ht> <35724F88-1BEB-4587-A6E7-AE07A2C648FC@rm.ht> Message-ID: Hi Remo, Attached are templates I used for the deployment. They are based on a deployment we did with OSP11. I made the changes for it to work with OSP13. I do think it's the roles_data.yaml file that is causing the error because if remove the " -r $TEMPLATES_DIR/roles_data.yaml" from the deployment script the deployment passes the point it was failing before but fails much later because of the missing definition of the role. Samuel On Sun, Jul 15, 2018 at 8:35 PM Remo Mattei wrote: > I still think there is something wrong with some of your yaml, the > roles_data is elaborating based on what your yaml files are. Can you share > your deployment script did you make any of the yaml files yourself? > > Remo > > On Jul 15, 2018, at 8:57 AM, Remo Mattei wrote: > > Here is the one I use > > > > > > On Jul 15, 2018, at 8:02 AM, Samuel Monderer > wrote: > > It seems that the problem is in my roles_data.yaml file but I don't see > what is the problem > I've attached the file. > > On Sun, Jul 15, 2018 at 12:46 AM Remo Mattei wrote: > >> It is a bad line in one of your yaml file. I would check them. >> >> Sent from my iPad >> >> On Jul 14, 2018, at 2:25 PM, Samuel Monderer < >> smonderer at vasonanetworks.com> wrote: >> >> Hi, >> >> I'm trying to deploy redhat OSP13 but I get the following error. >> (undercloud) [root at staging-director stack]# ./templates/deploy.sh >> Started Mistral Workflow >> tripleo.validations.v1.check_pre_deployment_validations. Execution ID: >> 3ba53aa3-56c5-4024-8d62-bafad967f7c2 >> Waiting for messages on queue 'tripleo' with no timeout. >> Removing the current plan files >> Uploading new plan files >> Started Mistral Workflow >> tripleo.plan_management.v1.update_deployment_plan. Execution ID: >> ff359b14-78d7-4b64-8b09-6ec3c4697d71 >> Plan updated. >> Processing templates in the directory >> /tmp/tripleoclient-ae4yIf/tripleo-heat-templates >> Unable to establish connection to >> https://192.168.50.30:13989/v2/action_executions: ('Connection >> aborted.', BadStatusLine("''",)) >> (undercloud) [root at staging-director stack]# >> >> Couldn't find any info in the logs of what causes the error. 
>> >> Samuel >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org >> ?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Sun Jul 15 20:37:01 2018 From: emilien at redhat.com (Emilien Macchi) Date: Sun, 15 Jul 2018 16:37:01 -0400 Subject: [openstack-dev] [tripleo] Plan to switch the undercloud to be containerized by default In-Reply-To: References: Message-ID: On Tue, Jul 10, 2018, 7:57 PM Emilien Macchi, wrote: > with [tripleo] tag... > > On Tue, Jul 10, 2018 at 7:56 PM Emilien Macchi wrote: > >> This is an update on where things are regarding $topic, based on feedback >> I've got from the work done recently: >> >> 1) Switch --use-heat to take a boolean and deprecate it >> >> We still want to allow users to deploy non containerized underclouds, so >> we made this patch so they can use --use-heat=False: >> https://review.openstack.org/#/c/581467/ >> Also https://review.openstack.org/#/c/581468 and >> https://review.openstack.org/581180 as dependencies >> >> 2) Configure CI jobs for containerized undercloud, except scenario001, >> 002 for timeout reasons (and figure out this problem in a parallel effort) >> >> https://review.openstack.org/#/c/575330 >> https://review.openstack.org/#/c/579755 >> >> 3) Switch tripleoclient to deploy by default a containerized undercloud >> >> https://review.openstack.org/576218 >> > It merged today, hopefully all CI jobs (including promotion) will continue to run smoothly. Thanks everyone involved in this big effort! >> 4) Improve performances in general so scenario001/002 doesn't timeout >> when containerized undercloud is enabled >> >> https://review.openstack.org/#/c/581183 is the patch that'll enable the >> containerized undercloud >> https://review.openstack.org/#/c/577889/ is a patch that enables >> pipelining in ansible/quickstart, but more is about to come, I'll update >> the patches tonight. >> > These scenarios are still on the edge of a timeout, we'll keep working on those. 
>> 5) Cleanup quickstart to stop using use-heat except for fs003 (needed to >> disable containers, and keep coverage for non containerized undercloud) >> >> https://review.openstack.org/#/c/581534/ >> >> >> Reviews are welcome, we aim to merge this work by milestone 3, in less >> than 2 weeks from now. >> > Since we merged the majority of the work, I think we can close the blueprint and not require FFE. Any feedback on that statement is welcome. Thanks, Emilien > -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Sun Jul 15 21:47:33 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Sun, 15 Jul 2018 15:47:33 -0600 Subject: [openstack-dev] [tripleo] Plan to switch the undercloud to be containerized by default In-Reply-To: References: Message-ID: On Sun, Jul 15, 2018 at 4:38 PM Emilien Macchi wrote: > On Tue, Jul 10, 2018, 7:57 PM Emilien Macchi, wrote: > >> with [tripleo] tag... >> >> On Tue, Jul 10, 2018 at 7:56 PM Emilien Macchi >> wrote: >> >>> This is an update on where things are regarding $topic, based on >>> feedback I've got from the work done recently: >>> >>> 1) Switch --use-heat to take a boolean and deprecate it >>> >>> We still want to allow users to deploy non containerized underclouds, so >>> we made this patch so they can use --use-heat=False: >>> https://review.openstack.org/#/c/581467/ >>> Also https://review.openstack.org/#/c/581468 and >>> https://review.openstack.org/581180 as dependencies >>> >>> 2) Configure CI jobs for containerized undercloud, except scenario001, >>> 002 for timeout reasons (and figure out this problem in a parallel effort) >>> >>> https://review.openstack.org/#/c/575330 >>> https://review.openstack.org/#/c/579755 >>> >>> 3) Switch tripleoclient to deploy by default a containerized undercloud >>> >>> https://review.openstack.org/576218 >>> >> > It merged today, hopefully all CI jobs (including promotion) will continue > to run smoothly. Thanks everyone involved in this big effort! > > >>> 4) Improve performances in general so scenario001/002 doesn't timeout >>> when containerized undercloud is enabled >>> >>> https://review.openstack.org/#/c/581183 is the patch that'll enable the >>> containerized undercloud >>> https://review.openstack.org/#/c/577889/ is a patch that enables >>> pipelining in ansible/quickstart, but more is about to come, I'll update >>> the patches tonight. >>> >> > These scenarios are still on the edge of a timeout, we'll keep working on > those. > > >>> 5) Cleanup quickstart to stop using use-heat except for fs003 (needed to >>> disable containers, and keep coverage for non containerized undercloud) >>> >>> https://review.openstack.org/#/c/581534/ >>> >>> >>> Reviews are welcome, we aim to merge this work by milestone 3, in less >>> than 2 weeks from now. >>> >> > Since we merged the majority of the work, I think we can close the > blueprint and not require FFE. > Any feedback on that statement is welcome. > > Thanks, > Emilien > Nice work Emiliien!! 
Thanks > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Wes Hayutin Associate MANAGER Red Hat w hayutin at redhat.com T: +1919 <+19197544114>4232509 IRC: weshay View my calendar and check my availability for meetings HERE -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony at bakeyournoodle.com Mon Jul 16 01:58:54 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Mon, 16 Jul 2018 11:58:54 +1000 Subject: [openstack-dev] [tripleo] Rocky blueprints In-Reply-To: <20180713001204.GI22285@thor.bakeyournoodle.com> References: <20180713001204.GI22285@thor.bakeyournoodle.com> Message-ID: <20180716015854.GA27474@thor.bakeyournoodle.com> On Fri, Jul 13, 2018 at 10:12:04AM +1000, Tony Breeds wrote: > On Wed, Jul 11, 2018 at 10:39:30AM -0600, Alex Schultz wrote: > > Currently open with pending patches (may need FFE): > > - https://blueprints.launchpad.net/tripleo/+spec/multiarch-support > > I'd like an FFE for this, the open reviews are in pretty good shape and > mostly merged. (or +W'd). > > We'll need another tripleo-common release after > https://review.openstack.org/537768 merges which I'd really like to do > next week if possible. Upon reflection I've -W'd some of the changes for this blueprint until Stein. The 5 left are: https://review.openstack.org/#/q/topic:bp/multiarch-support+is:open+label:Workflow%253E-1 Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From huangfuzeyi at gmail.com Mon Jul 16 02:02:23 2018 From: huangfuzeyi at gmail.com (Enoch Huangfu) Date: Mon, 16 Jul 2018 10:02:23 +0800 Subject: [openstack-dev] Need help on this neutron-server start error with vmware_nsx plugin enable In-Reply-To: References: Message-ID: Hi Tong, after change vmware_nsx to vmware_nsxv, still can't load, here are the logs: 2018-07-16 09:56:35.889 10814 DEBUG oslo_concurrency.lockutils [-] Lock "plugin-directory" acquired by "neutron_lib.plugins.directory._create_plu gin_directory" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 2018-07-16 09:56:35.889 10814 DEBUG oslo_concurrency.lockutils [-] Lock "plugin-directory" released by "neutron_lib.plugins.directory._create_plu gin_directory" :: held 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 2018-07-16 09:56:35.890 10814 DEBUG oslo_concurrency.lockutils [-] Lock "manager" acquired by "neutron.manager._create_instance" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273 2018-07-16 09:56:35.890 10814 INFO neutron.manager [-] Loading core plugin: vmware_nsxv.plugin.NsxDvsPlugin 2018-07-16 09:56:35.891 10814 ERROR neutron_lib.utils.runtime [-] Error loading class by alias: NoMatches: No 'neutron.core_plugins' driver found , looking for 'vmware_nsxv.plugin.NsxDvsPlugin' 2018-07-16 09:56:35.891 10814 ERROR neutron_lib.utils.runtime Traceback (most recent call last): 2018-07-16 09:56:35.891 10814 ERROR neutron_lib.utils.runtime File "/usr/lib/python2.7/site-packages/neutron_lib/utils/runtime.py", line 46, in load_class_by_alias_or_classname 2018-07-16 09:56:35.891 10814 ERROR neutron_lib.utils.runtime namespace, name, 
warn_on_missing_entrypoint=False) 2018-07-16 09:56:35.891 10814 ERROR neutron_lib.utils.runtime File "/usr/lib/python2.7/site-packages/stevedore/driver.py", line 61, in __init__ 2018-07-16 09:56:35.891 10814 ERROR neutron_lib.utils.runtime warn_on_missing_entrypoint=warn_on_missing_entrypoint 2018-07-16 09:56:35.891 10814 ERROR neutron_lib.utils.runtime File "/usr/lib/python2.7/site-packages/stevedore/named.py", line 89, in __init__ 2018-07-16 09:56:35.891 10814 ERROR neutron_lib.utils.runtime self._init_plugins(extensions) 2018-07-16 09:56:35.891 10814 ERROR neutron_lib.utils.runtime File "/usr/lib/python2.7/site-packages/stevedore/driver.py", line 113, in _init_plugins 2018-07-16 09:56:35.891 10814 ERROR neutron_lib.utils.runtime (self.namespace, name)) 2018-07-16 09:56:35.891 10814 ERROR neutron_lib.utils.runtime NoMatches: No 'neutron.core_plugins' driver found, looking for 'vmware_nsxv.plugin.NsxDvsPlugin' 2018-07-16 09:56:35.891 10814 ERROR neutron_lib.utils.runtime 2018-07-16 09:56:36.038 10814 ERROR neutron_lib.utils.runtime [-] Error loading class by class name: ImportError: No module named vmware_nsxv.plugin 2018-07-16 09:56:36.038 10814 ERROR neutron_lib.utils.runtime Traceback (most recent call last): 2018-07-16 09:56:36.038 10814 ERROR neutron_lib.utils.runtime File "/usr/lib/python2.7/site-packages/neutron_lib/utils/runtime.py", line 52, in load_class_by_alias_or_classname 2018-07-16 09:56:36.038 10814 ERROR neutron_lib.utils.runtime class_to_load = importutils.import_class(name) 2018-07-16 09:56:36.038 10814 ERROR neutron_lib.utils.runtime File "/usr/lib/python2.7/site-packages/oslo_utils/importutils.py", line 30, in import_class 2018-07-16 09:56:36.038 10814 ERROR neutron_lib.utils.runtime __import__(mod_str) 2018-07-16 09:56:36.038 10814 ERROR neutron_lib.utils.runtime ImportError: No module named vmware_nsxv.plugin 2018-07-16 09:56:36.038 10814 ERROR neutron_lib.utils.runtime 2018-07-16 09:56:36.039 10814 ERROR neutron.manager [-] Plugin 'vmware_nsxv.plugin.NsxDvsPlugin' not found. 2018-07-16 09:56:36.039 10814 DEBUG oslo_concurrency.lockutils [-] Lock "manager" released by "neutron.manager._create_instance" :: held 0.150s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 2018-07-16 09:56:36.040 10814 ERROR neutron.service [-] Unrecoverable error: please check log for details.: ImportError: Class not found. 
2018-07-16 09:56:36.040 10814 ERROR neutron.service Traceback (most recent call last): 2018-07-16 09:56:36.040 10814 ERROR neutron.service File "/usr/lib/python2.7/site-packages/neutron/service.py", line 86, in serve_wsgi 2018-07-16 09:56:36.040 10814 ERROR neutron.service service.start() 2018-07-16 09:56:36.040 10814 ERROR neutron.service File "/usr/lib/python2.7/site-packages/neutron/service.py", line 62, in start 2018-07-16 09:56:36.040 10814 ERROR neutron.service self.wsgi_app = _run_wsgi(self.app_name) 2018-07-16 09:56:36.040 10814 ERROR neutron.service File "/usr/lib/python2.7/site-packages/neutron/service.py", line 289, in _run_wsgi 2018-07-16 09:56:36.040 10814 ERROR neutron.service app = config.load_paste_app(app_name) 2018-07-16 09:56:36.040 10814 ERROR neutron.service File "/usr/lib/python2.7/site-packages/neutron/common/config.py", line 122, in load_paste_app 2018-07-16 09:56:36.040 10814 ERROR neutron.service app = loader.load_app(app_name) 2018-07-16 09:56:36.040 10814 ERROR neutron.service File "/usr/lib/python2.7/site-packages/oslo_service/wsgi.py", line 353, in load_app 2018-07-16 09:56:36.040 10814 ERROR neutron.service return deploy.loadapp("config:%s" % self.config_path, name=name) 2018-07-16 09:56:36.040 10814 ERROR neutron.service File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 247, in loadapp 2018-07-16 09:56:36.040 10814 ERROR neutron.service return loadobj(APP, uri, name=name, **kw) 2018-07-16 09:56:36.040 10814 ERROR neutron.service File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 272, in loadobj 2018-07-16 09:56:36.040 10814 ERROR neutron.service return context.create() 2018-07-16 09:56:36.040 10814 ERROR neutron.service File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 710, in create 2018-07-16 09:56:36.040 10814 ERROR neutron.service return self.object_type.invoke(self) 2018-07-16 09:56:36.040 10814 ERROR neutron.service File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 144, in invoke 2018-07-16 09:56:36.040 10814 ERROR neutron.service **context.local_conf) 2018-07-16 09:56:36.040 10814 ERROR neutron.service File "/usr/lib/python2.7/site-packages/paste/deploy/util.py", line 55, in fix_call 2018-07-16 09:56:36.040 10814 ERROR neutron.service val = callable(*args, **kw) 2018-07-16 09:56:36.040 10814 ERROR neutron.service File "/usr/lib/python2.7/site-packages/paste/urlmap.py", line 25, in urlmap_factory 2018-07-16 09:56:36.040 10814 ERROR neutron.service app = loader.get_app(app_name, global_conf=global_conf) 2018-07-16 09:56:36.040 10814 ERROR neutron.service File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 350, in get_app 2018-07-16 09:56:36.040 10814 ERROR neutron.service name=name, global_conf=global_conf).create() 2018-07-16 09:56:36.040 10814 ERROR neutron.service File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 710, in create 2018-07-16 09:56:36.040 10814 ERROR neutron.service return self.object_type.invoke(self) 2018-07-16 09:56:36.040 10814 ERROR neutron.service File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 144, in invoke 2018-07-16 09:56:36.040 10814 ERROR neutron.service **context.local_conf) 2018-07-16 09:56:36.040 10814 ERROR neutron.service File "/usr/lib/python2.7/site-packages/paste/deploy/util.py", line 55, in fix_call 2018-07-16 09:56:36.040 10814 ERROR neutron.service val = callable(*args, **kw) 2018-07-16 09:56:36.040 10814 ERROR neutron.service File 
"/usr/lib/python2.7/site-packages/neutron/auth.py", line 47, in pipeline_factory 2018-07-16 09:56:36.040 10814 ERROR neutron.service app = loader.get_app(pipeline[-1]) 2018-07-16 09:56:36.040 10814 ERROR neutron.service File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 350, in get_app 2018-07-16 09:56:36.040 10814 ERROR neutron.service name=name, global_conf=global_conf).create() 2018-07-16 09:56:36.040 10814 ERROR neutron.service File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 710, in create 2018-07-16 09:56:36.040 10814 ERROR neutron.service return self.object_type.invoke(self) 2018-07-16 09:56:36.040 10814 ERROR neutron.service File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 146, in invoke 2018-07-16 09:56:36.040 10814 ERROR neutron.service return fix_call(context.object, context.global_conf, **context.local_conf) 2018-07-16 09:56:36.040 10814 ERROR neutron.service File "/usr/lib/python2.7/site-packages/paste/deploy/util.py", line 55, in fix_call 2018-07-16 09:56:36.040 10814 ERROR neutron.service val = callable(*args, **kw) 2018-07-16 09:56:36.040 10814 ERROR neutron.service File "/usr/lib/python2.7/site-packages/neutron/api/v2/router.py", line 25, in _factory 2018-07-16 09:56:36.040 10814 ERROR neutron.service return pecan_app.v2_factory(global_config, **local_config) 2018-07-16 09:56:36.040 10814 ERROR neutron.service File "/usr/lib/python2.7/site-packages/neutron/pecan_wsgi/app.py", line 47, in v2_factory 2018-07-16 09:56:36.040 10814 ERROR neutron.service startup.initialize_all() 2018-07-16 09:56:36.040 10814 ERROR neutron.service File "/usr/lib/python2.7/site-packages/neutron/pecan_wsgi/startup.py", line 39, in initialize_all 2018-07-16 09:56:36.040 10814 ERROR neutron.service manager.init() 2018-07-16 09:56:36.040 10814 ERROR neutron.service File "/usr/lib/python2.7/site-packages/neutron/manager.py", line 296, in init 2018-07-16 09:56:36.040 10814 ERROR neutron.service NeutronManager.get_instance() 2018-07-16 09:56:36.040 10814 ERROR neutron.service File "/usr/lib/python2.7/site-packages/neutron/manager.py", line 247, in get_instance 2018-07-16 09:56:36.040 10814 ERROR neutron.service cls._create_instance() 2018-07-16 09:56:36.040 10814 ERROR neutron.service File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner 2018-07-16 09:56:36.040 10814 ERROR neutron.service return f(*args, **kwargs) 2018-07-16 09:56:36.040 10814 ERROR neutron.service File "/usr/lib/python2.7/site-packages/neutron/manager.py", line 233, in _create_instance 2018-07-16 09:56:36.040 10814 ERROR neutron.service cls._instance = cls() 2018-07-16 09:56:36.040 10814 ERROR neutron.service File "/usr/lib/python2.7/site-packages/neutron/manager.py", line 132, in __init__ 2018-07-16 09:56:36.040 10814 ERROR neutron.service plugin_provider) 2018-07-16 09:56:36.040 10814 ERROR neutron.service File "/usr/lib/python2.7/site-packages/neutron/manager.py", line 165, in _get_plugin_instance 2018-07-16 09:56:36.040 10814 ERROR neutron.service plugin_class = self.load_class_for_provider(namespace, plugin_provider) 2018-07-16 09:56:36.040 10814 ERROR neutron.service File "/usr/lib/python2.7/site-packages/neutron/manager.py", line 162, in load_class_for_provider 2018-07-16 09:56:36.040 10814 ERROR neutron.service LOG.error("Plugin '%s' not found.", plugin_provider) 2018-07-16 09:56:36.040 10814 ERROR neutron.service File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__ 2018-07-16 09:56:36.040 10814 
ERROR neutron.service self.force_reraise() 2018-07-16 09:56:36.040 10814 ERROR neutron.service File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise 2018-07-16 09:56:36.040 10814 ERROR neutron.service six.reraise(self.type_, self.value, self.tb) 2018-07-16 09:56:36.040 10814 ERROR neutron.service File "/usr/lib/python2.7/site-packages/neutron/manager.py", line 159, in load_class_for_provider 2018-07-16 09:56:36.040 10814 ERROR neutron.service plugin_provider) 2018-07-16 09:56:36.040 10814 ERROR neutron.service File "/usr/lib/python2.7/site-packages/neutron_lib/utils/runtime.py", line 58, in load_class_by_alias_or_classname 2018-07-16 09:56:36.040 10814 ERROR neutron.service raise ImportError(_("Class not found.")) 2018-07-16 09:56:36.040 10814 ERROR neutron.service ImportError: Class not found. 2018-07-16 09:56:36.040 10814 ERROR neutron.service 2018-07-16 09:56:36.042 10814 CRITICAL neutron [-] Unhandled error: ImportError: Class not found. the firewall problem has been resolved after installing the package Thanks, Enoch On Sat, Jul 14, 2018 at 12:05 AM Tong Liu wrote: > Hi Enoch, > > There are two issues here. > 1. Plugin 'vmware_nsx.plugin.NsxDvsPlugin' cannot be found. > This could be resolved by changing core_plugin to 'vmware_nsxv' as the > entry point for vmware_nsxv is defined as vmware_nsxv. > 2. No module named neutron_fwaas.db.firewall > It looks like you are missing firewall module. Can you try to install > neutron_fwaas module either from rpm or from repo? > > Thanks, > Tong > > On Fri, Jul 13, 2018 at 4:10 AM Enoch Huangfu > wrote: > >> env: >> openstack queen version on centos7 >> latest vmware_nsx plugin rpm >> installed: python-networking-vmware-nsx-12.0.1 >> >> when i modify 'core_plugin' value in [default] section of >> /etc/neutron/neutron.conf from ml2 to vmware_nsx.plugin.NsxDvsPlugin, then >> try to start neutron-server with command 'systemctl start neutron-server' >> on control node, the log shows: >> >> 2018-07-13 17:57:50.802 25653 INFO neutron.manager [-] Loading core >> plugin: vmware_nsx.plugin.NsxDvsPlugin >> 2018-07-13 17:57:51.017 25653 DEBUG neutron_lib.callbacks.manager [-] >> Subscribe: > > rbac-policy before_create >> subscribe >> /usr/lib/python2.7/site-packages/neutron_lib/callbacks/manager.py:41 >> 2018-07-13 17:57:51.017 25653 DEBUG neutron_lib.callbacks.manager [-] >> Subscribe: > > rbac-policy before_update >> subscribe >> /usr/lib/python2.7/site-packages/neutron_lib/callbacks/manager.py:41 >> 2018-07-13 17:57:51.017 25653 DEBUG neutron_lib.callbacks.manager [-] >> Subscribe: > > rbac-policy before_delete >> subscribe >> /usr/lib/python2.7/site-packages/neutron_lib/callbacks/manager.py:41 >> 2018-07-13 17:57:51.366 25653 DEBUG neutron_lib.callbacks.manager [-] >> Subscribe: >> router_gateway before_create subscribe >> /usr/lib/python2.7/site-packages/neutron_lib/callbacks/manager.py:41 >> 2018-07-13 17:57:51.393 25653 DEBUG neutron_lib.callbacks.manager [-] >> Subscribe: > > rbac-policy before_create >> subscribe >> /usr/lib/python2.7/site-packages/neutron_lib/callbacks/manager.py:41 >> 2018-07-13 17:57:51.394 25653 DEBUG neutron_lib.callbacks.manager [-] >> Subscribe: > > rbac-policy before_update >> subscribe >> /usr/lib/python2.7/site-packages/neutron_lib/callbacks/manager.py:41 >> 2018-07-13 17:57:51.394 25653 DEBUG neutron_lib.callbacks.manager [-] >> Subscribe: > > rbac-policy before_delete >> subscribe >> /usr/lib/python2.7/site-packages/neutron_lib/callbacks/manager.py:41 >> 2018-07-13 17:57:51.442 25653 
ERROR neutron_lib.utils.runtime [-] Error >> loading class by alias: NoMatches: No 'neutron.core_plugins' driver found, >> looking for 'vmware_nsx.plugin.NsxDvsPlugin' >> 2018-07-13 17:57:51.442 25653 ERROR neutron_lib.utils.runtime Traceback >> (most recent call last): >> 2018-07-13 17:57:51.442 25653 ERROR neutron_lib.utils.runtime File >> "/usr/lib/python2.7/site-packages/neutron_lib/utils/runtime.py", line 46, >> in load_class_by_alias_or_classname >> 2018-07-13 17:57:51.442 25653 ERROR neutron_lib.utils.runtime >> namespace, name, warn_on_missing_entrypoint=False) >> 2018-07-13 17:57:51.442 25653 ERROR neutron_lib.utils.runtime File >> "/usr/lib/python2.7/site-packages/stevedore/driver.py", line 61, in __init__ >> 2018-07-13 17:57:51.442 25653 ERROR neutron_lib.utils.runtime >> warn_on_missing_entrypoint=warn_on_missing_entrypoint >> 2018-07-13 17:57:51.442 25653 ERROR neutron_lib.utils.runtime File >> "/usr/lib/python2.7/site-packages/stevedore/named.py", line 89, in __init__ >> 2018-07-13 17:57:51.442 25653 ERROR neutron_lib.utils.runtime >> self._init_plugins(extensions) >> 2018-07-13 17:57:51.442 25653 ERROR neutron_lib.utils.runtime File >> "/usr/lib/python2.7/site-packages/stevedore/driver.py", line 113, in >> _init_plugins >> 2018-07-13 17:57:51.442 25653 ERROR neutron_lib.utils.runtime >> (self.namespace, name)) >> 2018-07-13 17:57:51.442 25653 ERROR neutron_lib.utils.runtime NoMatches: >> No 'neutron.core_plugins' driver found, looking for >> 'vmware_nsx.plugin.NsxDvsPlugin' >> 2018-07-13 17:57:51.442 25653 ERROR neutron_lib.utils.runtime >> 2018-07-13 17:57:51.443 25653 ERROR neutron_lib.utils.runtime [-] Error >> loading class by class name: ImportError: No module named >> neutron_fwaas.db.firewall >> 2018-07-13 17:57:51.443 25653 ERROR neutron_lib.utils.runtime Traceback >> (most recent call last): >> 2018-07-13 17:57:51.443 25653 ERROR neutron_lib.utils.runtime File >> "/usr/lib/python2.7/site-packages/neutron_lib/utils/runtime.py", line 52, >> in load_class_by_alias_or_classname >> 2018-07-13 17:57:51.443 25653 ERROR neutron_lib.utils.runtime >> class_to_load = importutils.import_class(name) >> 2018-07-13 17:57:51.443 25653 ERROR neutron_lib.utils.runtime File >> "/usr/lib/python2.7/site-packages/oslo_utils/importutils.py", line 30, in >> import_class >> 2018-07-13 17:57:51.443 25653 ERROR neutron_lib.utils.runtime >> __import__(mod_str) >> 2018-07-13 17:57:51.443 25653 ERROR neutron_lib.utils.runtime File >> "/usr/lib/python2.7/site-packages/vmware_nsx/plugin.py", line 24, in >> >> 2018-07-13 17:57:51.443 25653 ERROR neutron_lib.utils.runtime from >> vmware_nsx.plugins.nsx import plugin as nsx >> 2018-07-13 17:57:51.443 25653 ERROR neutron_lib.utils.runtime File >> "/usr/lib/python2.7/site-packages/vmware_nsx/plugins/nsx/plugin.py", line >> 64, in >> 2018-07-13 17:57:51.443 25653 ERROR neutron_lib.utils.runtime from >> vmware_nsx.plugins.nsx_v import plugin as v >> 2018-07-13 17:57:51.443 25653 ERROR neutron_lib.utils.runtime File >> "/usr/lib/python2.7/site-packages/vmware_nsx/plugins/nsx_v/plugin.py", line >> 145, in >> 2018-07-13 17:57:51.443 25653 ERROR neutron_lib.utils.runtime from >> vmware_nsx.services.fwaas.nsx_v import fwaas_callbacks >> 2018-07-13 17:57:51.443 25653 ERROR neutron_lib.utils.runtime File >> "/usr/lib/python2.7/site-packages/vmware_nsx/services/fwaas/nsx_v/fwa >> as_callbacks.py", line 19, in >> 2018-07-13 17:57:51.443 25653 ERROR neutron_lib.utils.runtime from >> vmware_nsx.services.fwaas.common import fwaas_callbacks_v1 as com_c >> lbcks >> 
2018-07-13 17:57:51.443 25653 ERROR neutron_lib.utils.runtime File >> "/usr/lib/python2.7/site-packages/vmware_nsx/services/fwaas/common/fw >> aas_callbacks_v1.py", line 21, in >> 2018-07-13 17:57:51.443 25653 ERROR neutron_lib.utils.runtime from >> neutron_fwaas.db.firewall import firewall_db # noqa >> 2018-07-13 17:57:51.443 25653 ERROR neutron_lib.utils.runtime >> ImportError: No module named neutron_fwaas.db.firewall >> 2018-07-13 17:57:51.443 25653 ERROR neutron_lib.utils.runtime >> 2018-07-13 17:57:51.445 25653 ERROR neutron.manager [-] Plugin >> 'vmware_nsx.plugin.NsxDvsPlugin' not found. >> 2018-07-13 17:57:51.446 25653 DEBUG oslo_concurrency.lockutils [-] Lock >> "manager" released by "neutron.manager._create_instance" :: held 0 >> .644s inner >> /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285 >> 2018-07-13 17:57:51.446 25653 ERROR neutron.service [-] Unrecoverable >> error: please check log for details.: ImportError: Class not found. >> 2018-07-13 17:57:51.446 25653 ERROR neutron.service Traceback (most >> recent call last): >> 2018-07-13 17:57:51.446 25653 ERROR neutron.service File >> "/usr/lib/python2.7/site-packages/neutron/service.py", line 86, in >> serve_wsgi >> 2018-07-13 17:57:51.446 25653 ERROR neutron.service service.start() >> 2018-07-13 17:57:51.446 25653 ERROR neutron.service File >> "/usr/lib/python2.7/site-packages/neutron/service.py", line 62, in start >> 2018-07-13 17:57:51.446 25653 ERROR neutron.service self.wsgi_app = >> _run_wsgi(self.app_name) >> 2018-07-13 17:57:51.446 25653 ERROR neutron.service File >> "/usr/lib/python2.7/site-packages/neutron/service.py", line 289, in >> _run_wsgi >> 2018-07-13 17:57:51.446 25653 ERROR neutron.service app = >> config.load_paste_app(app_name) >> >> >> >> >> I have checked the configuration and plugin package with vmware openstack >> integration 5.0 build, seems that all things are the same, I have no idea >> now......... >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhengzhenyulixi at gmail.com Mon Jul 16 03:57:56 2018 From: zhengzhenyulixi at gmail.com (Zhenyu Zheng) Date: Mon, 16 Jul 2018 11:57:56 +0800 Subject: [openstack-dev] [nova]API update week 5-11 In-Reply-To: <5215aef5-bc15-7e9a-d3d6-bdcbd1d505aa@gmail.com> References: <1648c0de17c.116c7e05210664.8822607615684205579@ghanshyammann.com> <5215aef5-bc15-7e9a-d3d6-bdcbd1d505aa@gmail.com> Message-ID: Thank you very much for the review and updates during the weekends. On Sat, Jul 14, 2018 at 4:05 AM Matt Riedemann wrote: > On 7/11/2018 9:03 PM, Zhenyu Zheng wrote: > > 2. 
Abort live migration in queued state: > > - > https://blueprints.launchpad.net/nova/+spec/abort-live-migration-in-queued-status > > - > https://review.openstack.org/#/q/topic:bp/abort-live-migration-in-queued-status+(status:open+OR+status:merged) > > < > https://review.openstack.org/#/q/topic:bp/abort-live-migration-in-queued-status+%28status:open+OR+status:merged%29 > > > > - Weekly Progress: Review is going and it is in nova runway this > > week. In API office hour, we discussed about doing the compute > > service version checks oncompute.api.py side > > than on rpc side. Dan has point of doing it on rpc side where > > migration status can changed to running. We decided to further > > discussed it on patch. > > > > > > This is my own defence, Dan's point seems to be that the actual rpc > > version pin could be set to be lower than the can_send_version even when > > the service version is new enough, so he thinks doing it in rpc is > better. > > That series is all rebased now and I'm +2 up the stack until the API > change, where I'm just +1 since I wrote the compute service version > checking part, but I think this series is ready for wider review. > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From colleen at gazlene.net Mon Jul 16 06:39:56 2018 From: colleen at gazlene.net (Colleen Murphy) Date: Mon, 16 Jul 2018 08:39:56 +0200 Subject: [openstack-dev] [keystone] Feature Status and Exceptions In-Reply-To: References: Message-ID: <1531723196.3737161.1441873184.1A1F53B7@webmail.messagingengine.com> On Fri, Jul 13, 2018, at 9:19 PM, Lance Bragstad wrote: > Hey all, > > As noted in the weekly report [0], today is feature freeze for > keystone-related specifications. I wanted to elaborate on each > specification so that our plan is clear moving forward. > > *Unified Limits** > ** > *I propose that we issue a feature freeze exception for this work. > Mainly because the changes are relatively isolated and low-risk. The > majority of the feedback on the approach is being held up by an > interface decision, which doesn't impact users, it's certainly more of a > developer preference [1]. > > That said, I don't think it would be too ambitious to focus reviews on > this next week and iron out the last few bits well before rocky-3. > > *Default Roles** > * > The implementation to ensure each of the new defaults is available after > installing keystone is complete. We realized that incorporating those > new roles into keystone's default policies would be a lot easier after > the flask work lands [2]. Instead of doing a bunch of work to > incorporate those default and then re-doing it to accommodate flask, I > think we have a safe checkpoint where we are right now. We can use free > cycles during the RC period to queue up those implementation, mark them > with a -2, and hit the ground running in Stein. This approach feels like > the safest compromise between risk and reward. > > *Capability Lists** > * > The capability lists involves a lot of work, not just within keystone, > but also keystonemiddleware, which will freeze next week. I think it's > reasonable to say that this will be something that has to be pushed to > Stein [3]. 
> > *MFA Receipts** > * > Much of the code used in the existing approach uses a lot of the same > patterns from the token provider API within keystone [4]. Since the UUID > and SQL parts of the token provider API have been removed, we're also in > the middle of cleaning up a ton of technical debt in that area [5]. > Adrian seems OK giving us the opportunity to finish cleaning things up > before reworking his proposal for authentication receipts. IMO, this > seems totally reasonable since it will help us ensure the new code for > authentication receipts doesn't have the bad patterns that have plagued > us with the token provider API. > > > Does anyone have objections to any of these proposals? If not, I can > start bumping various specs to reflect the status described here. All sounds good to me, thanks for writing this up. Colleen > > > [0] > http://lists.openstack.org/pipermail/openstack-dev/2018-July/132202.html > [1] > https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:master+topic:bp/strict-two-level-model > [2] > https://review.openstack.org/#/q/(status:open+OR+status:merged)+project:openstack/keystone+branch:master+topic:bug/1776504 > [3] > https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:master+topic:bp/whitelist-extension-for-app-creds > [4] > https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:master+topic:bp/mfa-auth-receipt > [5] > https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:master+topic:bug/1778945 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Email had 1 attachment: > + signature.asc > 1k (application/pgp-signature) From lijie at unitedstack.com Mon Jul 16 08:48:58 2018 From: lijie at unitedstack.com (=?utf-8?B?UmFtYm8=?=) Date: Mon, 16 Jul 2018 16:48:58 +0800 Subject: [openstack-dev] [cinder] about block device driver Message-ID: Hi,all In the Cinder repository, I noticed that the BlockDeviceDriver driver is being deprecated, and was eventually be removed with the Queens release. https://github.com/openstack/cinder/blob/stable/ocata/cinder/volume/drivers/block_device.py In my use case, the instances using Cinder perform intense I/O, thus iSCSI or LVM is not a viable option - benchmarked them several times, since Juno, unsatisfactory results.For data processing scenarios is always better to use local storage than any SAN/NAS solution. So I felt a great need to know why we deprecated it.If there has any better one to replace it? What do you suggest to use once BlockDeviceDriver is removed?Can you tell me about this?Thank you very much! Best Regards Rambo -------------- next part -------------- An HTML attachment was scrubbed... URL: From geguileo at redhat.com Mon Jul 16 09:20:27 2018 From: geguileo at redhat.com (Gorka Eguileor) Date: Mon, 16 Jul 2018 11:20:27 +0200 Subject: [openstack-dev] [cinder] about block device driver In-Reply-To: References: Message-ID: <20180716092027.pc43radmozdgndd5@localhost> On 16/07, Rambo wrote: > Hi,all > > > In the Cinder repository, I noticed that the BlockDeviceDriver driver is being deprecated, and was eventually be removed with the Queens release. 
> > > https://github.com/openstack/cinder/blob/stable/ocata/cinder/volume/drivers/block_device.py > > > In my use case, the instances using Cinder perform intense I/O, thus iSCSI or LVM is not a viable option - benchmarked them several times, since Juno, unsatisfactory results.For data processing scenarios is always better to use local storage than any SAN/NAS solution. > > > So I felt a great need to know why we deprecated it.If there has any better one to replace it? What do you suggest to use once BlockDeviceDriver is removed?Can you tell me about this?Thank you very much! > > Best Regards > Rambo Hi, If I remember correctly the driver was deprecated because it had no maintainer or CI. In Cinder we require our drivers to have both, otherwise we can't guarantee that they actually work or that anyone will fix it if it gets broken. Cheers, Gorka. From geguileo at redhat.com Mon Jul 16 09:37:50 2018 From: geguileo at redhat.com (Gorka Eguileor) Date: Mon, 16 Jul 2018 11:37:50 +0200 Subject: [openstack-dev] [openstack-community] Give cinder-backup more CPU resources In-Reply-To: References: Message-ID: <20180716093750.ngd7uekzbvrkxmve@localhost> On 06/07, Amy Marrich wrote: > Hey, > > Forwarding to the Dev list as you may get a better response from there. > > Thanks, > > Amy (spotz) > Hi, I included a good number of improvements regarding bottlenecks on the backup service in recent releases, some of them are automatic, and others need configuration tweaks. But even in older releases there are a couple of things that can be done to mitigate/fix this situation, depending on the release you are using. In general I recommend: - Using more than one process: Use the "backup_processes" option, if available, to run multiple subprocesses (like Alan suggested). If this option is not available run multiple backup services on the same machine (like Duncan suggests), but use a configuration overlay to change the host configuration option. That way you won't have problems with one service messing up the others' resources at start time. - Increase the native thread pool size: This is even more critical if you are using RBD backend: Using the "backend_native_threads_pool_size" configuration option [1], if available, we can increase the default size. If your deployment doesn't have this option, you can still use environmental variable "EVENTLET_THREADPOOL_SIZE" to set it in any release. Cheers, Gorka. [1]: https://github.com/openstack/cinder/commit/e570436d1cca5cfa89388aec8b2daa63d01d0250 > On Thu, Jul 5, 2018 at 11:30 PM, Keynes Lee/WHQ/Wistron < > Keynes_Lee at wistron.com> wrote: > > > Hi > > > > > > > > When making “cinder backup-create” > > > > We found the process “cinder-backup” use 100% util of 1 CPU core on an > > OpenStack Controller node. > > > > It not just causes a bad backup performance, also make the > > openstack-cinder-backup unstable. > > > > Especially when we make several backup at the same time. > > > > > > > > The Controller Node has 40 CPU cores. > > > > Can we assign more CPU resources to cinder-backup ? 
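As a rough illustration of the tuning Gorka describes above: the knobs live in cinder.conf and in the environment of the cinder-backup service. The option names and their availability differ between Cinder releases, so treat the names and values below as placeholders to verify against your own release notes before relying on them.

    # /etc/cinder/cinder.conf -- illustrative only
    [DEFAULT]
    # run several backup processes instead of a single one
    backup_processes = 4
    # larger native thread pool, mainly relevant for the RBD backup driver
    backend_native_threads_pool_size = 60

On releases that predate the pool-size option, a similar effect can be had by exporting EVENTLET_THREADPOOL_SIZE in the environment that starts the service, for example with a systemd drop-in (assuming the unit is named openstack-cinder-backup on your distribution):

    # /etc/systemd/system/openstack-cinder-backup.service.d/threadpool.conf
    [Service]
    Environment=EVENTLET_THREADPOOL_SIZE=60

    # then: systemctl daemon-reload && systemctl restart openstack-cinder-backup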
> > > > > > > > > > > > > > > > > > > > > > > > > > > > [image: cid:image007.jpg at 01D1747D.DB260110] > > > > *Keynes Lee **李* *俊* *賢* > > > > Direct: > > > > +886-2-6612-1025 > > > > Mobile: > > > > +886-9-1882-3787 > > > > Fax: > > > > +886-2-6612-1991 > > > > > > > > E-Mail: > > > > keynes_lee at wistron.com > > > > > > > > > > > > > > *---------------------------------------------------------------------------------------------------------------------------------------------------------------* > > > > *This email contains confidential or legally privileged information and is > > for the sole use of its intended recipient. * > > > > *Any unauthorized review, use, copying or distribution of this email or > > the content of this email is strictly prohibited.* > > > > *If you are not the intended recipient, you may reply to the sender and > > should delete this e-mail immediately.* > > > > > > *---------------------------------------------------------------------------------------------------------------------------------------------------------------* > > > > _______________________________________________ > > Community mailing list > > Community at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/community > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From sgolovat at redhat.com Mon Jul 16 09:49:47 2018 From: sgolovat at redhat.com (Sergii Golovatiuk) Date: Mon, 16 Jul 2018 11:49:47 +0200 Subject: [openstack-dev] [tripleo] Stein blueprint - Plan to remove Keepalived support (replaced by Pacemaker) In-Reply-To: References: Message-ID: Hi, On Fri, Jul 13, 2018 at 9:11 PM, Juan Antonio Osorio wrote: > Sounds good to me. Even if pacemaker is heavier, less options and > consistency is better. > > Greetings from Mexico :D Greetings from Poznań :D > > On Fri, 13 Jul 2018, 13:33 Emilien Macchi, wrote: >> >> Greetings, >> >> We have been supporting both Keepalived and Pacemaker to handle VIP >> management. This is really good initiative which supports the main idea of 'simplicity'. >> Keepalived is actually the tool used by the undercloud when SSL is enabled >> (for SSL termination). >> While Pacemaker is used on the overcloud to handle VIPs but also services >> HA. >> >> I see some benefits at removing support for keepalived and deploying >> Pacemaker by default: >> - pacemaker can be deployed on one node (we actually do it in CI), so can >> be deployed on the undercloud to handle VIPs and manage HA as well. Additionally, undercloud services may be done HA on 3 nodes if/when it's really required. >> - it'll allow to extend undercloud & standalone use cases to support >> multinode one day, with HA and SSL, like we already have on the overcloud. >> - it removes the complexity of managing two tools so we'll potentially >> removing code in TripleO. ++ >> - of course since pacemaker features from overcloud would be usable in >> standalone environment, but also on the undercloud. The same OCF scripts will be used for undercloud and overcloud. >> >> There is probably some downside, the first one is I think Keepalived is >> much more lightweight than Pacemaker, we probably need to run some benchmark >> here and make sure we don't make the undercloud heavier than it is now. 
>From other perspective operator need to learn/support 2 tools. >> >> I went ahead and created this blueprint for Stein: >> >> https://blueprints.launchpad.net/tripleo/+spec/undercloud-pacemaker-default >> I also plan to prototype some basic code soon and provide an upgrade path >> if we accept this blueprint. I would like to participate in this initiative as I found it very valuable. >> >> This is something I would like to discuss here and at the PTG, feel free >> to bring questions/concerns, >> Thanks! >> -- >> Emilien Macchi >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Best Regards, Sergii Golovatiuk From lijie at unitedstack.com Mon Jul 16 09:53:03 2018 From: lijie at unitedstack.com (=?utf-8?B?UmFtYm8=?=) Date: Mon, 16 Jul 2018 17:53:03 +0800 Subject: [openstack-dev] [cinder] about block device driver In-Reply-To: <20180716092027.pc43radmozdgndd5@localhost> References: <20180716092027.pc43radmozdgndd5@localhost> Message-ID: Well,in my opinion,the BlockDeviceDriver is more suitable than any other solution for data processing scenarios.Does the community will agree to merge the BlockDeviceDriver to the Cinder repository again if our company hold the maintainer and CI? ------------------ Original ------------------ From: "Gorka Eguileor"; Date: 2018年7月16日(星期一) 下午5:20 To: "OpenStack Developmen"; Subject: Re: [openstack-dev] [cinder] about block device driver On 16/07, Rambo wrote: > Hi,all > > > In the Cinder repository, I noticed that the BlockDeviceDriver driver is being deprecated, and was eventually be removed with the Queens release. > > > https://github.com/openstack/cinder/blob/stable/ocata/cinder/volume/drivers/block_device.py > > > In my use case, the instances using Cinder perform intense I/O, thus iSCSI or LVM is not a viable option - benchmarked them several times, since Juno, unsatisfactory results.For data processing scenarios is always better to use local storage than any SAN/NAS solution. > > > So I felt a great need to know why we deprecated it.If there has any better one to replace it? What do you suggest to use once BlockDeviceDriver is removed?Can you tell me about this?Thank you very much! > > Best Regards > Rambo Hi, If I remember correctly the driver was deprecated because it had no maintainer or CI. In Cinder we require our drivers to have both, otherwise we can't guarantee that they actually work or that anyone will fix it if it gets broken. Cheers, Gorka. __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sgolovat at redhat.com Mon Jul 16 10:18:22 2018 From: sgolovat at redhat.com (Sergii Golovatiuk) Date: Mon, 16 Jul 2018 12:18:22 +0200 Subject: [openstack-dev] [Tripleo] New validation: ensure we actually have enough disk space on the undercloud In-Reply-To: <008b5fb0-002e-801c-8328-e1ae8ed911cb@redhat.com> References: <008b5fb0-002e-801c-8328-e1ae8ed911cb@redhat.com> Message-ID: Hi, On Thu, Jul 12, 2018 at 4:45 PM, Cédric Jeanneret wrote: > Dear Stackers, > > I'm currently looking for some inputs in order to get a new validation, > ran as a "preflight check" on the undercloud. > > The aim is to ensure we actually have enough disk space for all the > files and, most importantly, the registry, being local on the > undercloud, or remote (provided the operator has access to it, of course). > > Although the doc talks about minimum requirements, there's the "never > trust the user inputs" law, so it would be great to ensure the user > didn't overlook the requirements regarding disk space. You may check disk space before undercloud deployment. All we need to do is to ensure there is enough space for particular version of undercloud. Also there should be upgrade check to validate space before running undercloud upgrade. > > The "right" way would be to add a new validation directly in the > tripleo-validations repository, and run it at an early stage of the > undercloud deployment (and maybe once again before the overcloud deploy > starts, as disk space will probably change due to the registry and logs > and packages and so on). Since validations are simple ansible tasks so you may invoke them before installing undercloud. However, we may need additional tag for undercloud checks > > There are a few details on this public trello card: > https://trello.com/c/QqBsMmP9/89-implement-storage-space-checks > > What do you think? Care to provide some hints and tips for the correct > implementation? > > Thank you! > > Bests, > > C. > > > > -- > Cédric Jeanneret > Software Engineer > DFG:DF > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Best Regards, Sergii Golovatiuk From cjeanner at redhat.com Mon Jul 16 10:39:29 2018 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Mon, 16 Jul 2018 12:39:29 +0200 Subject: [openstack-dev] [Tripleo] New validation: ensure we actually have enough disk space on the undercloud In-Reply-To: References: <008b5fb0-002e-801c-8328-e1ae8ed911cb@redhat.com> Message-ID: On 07/16/2018 12:18 PM, Sergii Golovatiuk wrote: > Hi, > > On Thu, Jul 12, 2018 at 4:45 PM, Cédric Jeanneret wrote: >> Dear Stackers, >> >> I'm currently looking for some inputs in order to get a new validation, >> ran as a "preflight check" on the undercloud. >> >> The aim is to ensure we actually have enough disk space for all the >> files and, most importantly, the registry, being local on the >> undercloud, or remote (provided the operator has access to it, of course). >> >> Although the doc talks about minimum requirements, there's the "never >> trust the user inputs" law, so it would be great to ensure the user >> didn't overlook the requirements regarding disk space. > > You may check disk space before undercloud deployment. All we need to > do is to ensure there is enough space for particular version of > undercloud. 
Also there should be upgrade check to validate space > before running undercloud upgrade.

Yep, that's the intent. According to the requirements, 60G minimum is required for the undercloud prior to deploy - and there's a check in tripleo-validations for upgrade, asking for 10G.

> >> >> The "right" way would be to add a new validation directly in the >> tripleo-validations repository, and run it at an early stage of the >> undercloud deployment (and maybe once again before the overcloud deploy >> starts, as disk space will probably change due to the registry and logs >> and packages and so on). > > Since validations are simple ansible tasks so you may invoke them > before installing undercloud. However, we may need additional tag for > undercloud checks

Not really - in fact we can run them as ansible from the undercloud_preflight "lib" - it's already in the pipe, just have to ensure it actually works as expected. There's a tiny thing though: the "openstack-tripleo-validations" package is not installed prior to the undercloud deploy, meaning we don't have access to its content - I'm currently adding this package as a dependency of python-tripleoclient: https://review.rdoproject.org/r/#/c/14847/

Once this one is merged, we will be able to call all the validations we want from this package - in addition, I've created a small wrapper allowing us to call "ansible-playbook" with the correct options directly from the undercloud_preflight.py thingy. That will allow us to converge some requirements, for example the RAM check is wrong in the undercloud_preflight (checking for 8G only) but correct in the ansible check (asking for 16G). The card has been updated accordingly.

Thank you for the answers, in and off-list :).

Cheers,

C.

> >> >> There are a few details on this public trello card: >> https://trello.com/c/QqBsMmP9/89-implement-storage-space-checks >> >> What do you think? Care to provide some hints and tips for the correct >> implementation? >> >> Thank you! >> >> Bests, >> >> C. >> >> >> >> -- >> Cédric Jeanneret >> Software Engineer >> DFG:DF >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > -- Cédric Jeanneret Software Engineer DFG:DF -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: 

From mark at stackhpc.com Mon Jul 16 11:09:30 2018 From: mark at stackhpc.com (Mark Goddard) Date: Mon, 16 Jul 2018 12:09:30 +0100 Subject: [openstack-dev] [kolla-ansible] how do I unify log data format In-Reply-To: References: Message-ID: 

Hi Sergey,

We are using Kolla Ansible with the Monasca log API, and have added support for customising the fluentd configuration [1][2]. Doug Szumski (dougsz) made some changes in Queens to try to standardise the log message format. I think Kolla Ansible would benefit from some better documentation on what this format is. On the Monasca log API side there is support for transforming logs using logstash.
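As a concrete starting point, the custom log forwarding mechanism referenced in [1] below lets you drop extra fluentd output configuration onto the deployment hosts. A minimal sketch, assuming the default node_custom_config location of /etc/kolla/config and a plain fluentd "forward" output - the collector address is a placeholder, and [1] describes exactly where the file is picked up and how it is included:

    # /etc/kolla/config/fluentd/output/10-forward.conf
    <match **>
      @type forward
      <server>
        host 192.0.2.10    # placeholder: your logstash/fluentd collector
        port 24224
      </server>
    </match>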
> We are migrating our product to kolla-ansible and as far as probably you > know, it uses fluentd to control logs, etc. In non containerized openstack > we use rsyslog to send data to logstash. We get data from syslog events. It > looks like it's impossible to use syslog in kolla-ansible. Unfortunately > external_syslog_server option doesn't work. Is there anyone who was able to > use it ? But, nevermind, we may use fluentd BUT.. we have one problem - > different data format for each service/container. > > So, probably the most optimal solution is to use default logging idea in > kolla-ansible. (to be honest, I am not sure... but I've no found better > option). But even with default logging idea in kolla - ansible we have one > serious problem. Fluentd has different data format for each service, for > instance, you may see this link with explanation how its designed in > kolla-ansible > https://github.com/openstack/kolla-ansible/commit/ > 3026cef7cfd1828a27e565d4211692f0ab0ce22e > there are grok patterns which parses log messages, etc > > so, we managed to put data to elasticsearch but we need to solve two > problems: > 1. unify data format for log events. We may solve it using logstash to > unify it before putting it to elasticsearch (or should we change fluentd > configs in our own version of kolla-ansible repository ? ) > For instance, we may do it using this logstash plugin > https://www.elastic.co/guide/en/logstash/2.4/plugins- > filters-mutate.html#plugins-filters-mutate-rename > > What's your suggestion ? > > > -- > Best, Sergey > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From geguileo at redhat.com Mon Jul 16 11:32:26 2018 From: geguileo at redhat.com (Gorka Eguileor) Date: Mon, 16 Jul 2018 13:32:26 +0200 Subject: [openstack-dev] [cinder] about block device driver In-Reply-To: References: <20180716092027.pc43radmozdgndd5@localhost> Message-ID: <20180716113226.hdqpzfkeyjkpx5kn@localhost> On 16/07, Rambo wrote: > Well,in my opinion,the BlockDeviceDriver is more suitable than any other solution for data processing scenarios.Does the community will agree to merge the BlockDeviceDriver to the Cinder repository again if our company hold the maintainer and CI? > Hi, I'm sure the community will be happy to merge the driver back into the repository. Still, I would recommend you looking at the "How To Contribute a driver to Cinder" guide [1] and the "Third Party CI Requirement Policy" documentation [2], and then adding this topic to Wednesday's meeting [3] and go to the meeting to ensure that everybody is on board with it. Best regards, Gorka. [1]: https://wiki.openstack.org/wiki/Cinder/how-to-contribute-a-driver [2]: https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers [3]: https://etherpad.openstack.org/p/cinder-rocky-meeting-agendas > > ------------------ Original ------------------ > From: "Gorka Eguileor"; > Date: 2018年7月16日(星期一) 下午5:20 > To: "OpenStack Developmen"; > Subject: Re: [openstack-dev] [cinder] about block device driver > > > On 16/07, Rambo wrote: > > Hi,all > > > > > > In the Cinder repository, I noticed that the BlockDeviceDriver driver is being deprecated, and was eventually be removed with the Queens release. 
> > > > > > https://github.com/openstack/cinder/blob/stable/ocata/cinder/volume/drivers/block_device.py > > > > > > In my use case, the instances using Cinder perform intense I/O, thus iSCSI or LVM is not a viable option - benchmarked them several times, since Juno, unsatisfactory results.For data processing scenarios is always better to use local storage than any SAN/NAS solution. > > > > > > So I felt a great need to know why we deprecated it.If there has any better one to replace it? What do you suggest to use once BlockDeviceDriver is removed?Can you tell me about this?Thank you very much! > > > > Best Regards > > Rambo > > Hi, > > If I remember correctly the driver was deprecated because it had no > maintainer or CI. In Cinder we require our drivers to have both, > otherwise we can't guarantee that they actually work or that anyone will > fix it if it gets broken. > > Cheers, > Gorka. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From shardy at redhat.com Mon Jul 16 12:05:09 2018 From: shardy at redhat.com (Steven Hardy) Date: Mon, 16 Jul 2018 13:05:09 +0100 Subject: [openstack-dev] [tripleo] Mistral workflow cannot establish connection In-Reply-To: References: <1CDF4A32-AFB3-44F2-94C4-339EF36AE4D2@rm.ht> <35724F88-1BEB-4587-A6E7-AE07A2C648FC@rm.ht> Message-ID: On Sun, Jul 15, 2018 at 7:50 PM, Samuel Monderer wrote: > > Hi Remo, > > Attached are templates I used for the deployment. They are based on a deployment we did with OSP11. > I made the changes for it to work with OSP13. > > I do think it's the roles_data.yaml file that is causing the error because if remove the " -r $TEMPLATES_DIR/roles_data.yaml" from the deployment script the deployment passes the point it was failing before but fails much later because of the missing definition of the role. I can't see a problem with the roles_data.yaml you provided, it seems to render ok using tripleo-heat-templates/tools/process-templates.py - are you sure the error isn't related to uploading the roles_data file to the swift container? I'd check basic CLI access to swift as a sanity check, e.g something like: openstack container list and writing the roles data e.g: openstack object create overcloud roles_data.yaml If that works OK then it may be an haproxy timeout - you are specifying quite a lot of roles, so I wonder if something is timing out during the plan creation phase - we had some similar issues in CI ref https://bugs.launchpad.net/tripleo-quickstart/+bug/1638908 where increasing the haproxy timeouts helped. 
Steve From sfinucan at redhat.com Mon Jul 16 12:40:15 2018 From: sfinucan at redhat.com (Stephen Finucane) Date: Mon, 16 Jul 2018 13:40:15 +0100 Subject: [openstack-dev] [nova] Reminder on nova-network API removal work In-Reply-To: References: Message-ID: On Fri, 2018-07-13 at 15:23 -0500, Matt Riedemann wrote: > There are currently no open changes for the nova-network API removal > tracked here [1] but there are at least two low-hanging fruit APIs to > remove: > > * os-floating-ips-bulk > * os-floating-ips-dns The two of these are done now. * https://review.openstack.org/582912 * https://review.openstack.org/582943 There's a tempest test that needs to be disabled/removed for the first one and I'm waiting on CI results for the second. Looking at the other ones to see if I can figure out if they can/should be removed. Stephen > It would be nice to at least get those removed yet before the feature > freeze. See one of the existing linked removal patches in the etherpad > for an example of how to do this, and/or read the doc [2]. > > [1] https://etherpad.openstack.org/p/nova-network-removal-rocky > [2] > https://docs.openstack.org/nova/latest/contributor/api.html#removing-deprecated-apis > From lijie at unitedstack.com Mon Jul 16 12:44:26 2018 From: lijie at unitedstack.com (=?utf-8?B?UmFtYm8=?=) Date: Mon, 16 Jul 2018 20:44:26 +0800 Subject: [openstack-dev] =?utf-8?b?5Zue5aSNOlJlOiAgW2NpbmRlcl0gYWJvdXQg?= =?utf-8?q?block_device_driver?= Message-ID: ok,thank you --------------原始邮件-------------- 发件人:"Gorka Eguileor "; 发送时间:2018年7月16日(星期一) 晚上7:32 收件人:"OpenStack Developmen" ; 主题:Re: [openstack-dev] [cinder] about block device driver ----------------------------------- On 16/07, Rambo wrote: > Well,in my opinion,the BlockDeviceDriver is more suitable than any other solution for data processing scenarios.Does the community will agree to merge the BlockDeviceDriver to the Cinder repository again if our company hold the maintainer and CI? > Hi, I'm sure the community will be happy to merge the driver back into the repository. Still, I would recommend you looking at the "How To Contribute a driver to Cinder" guide [1] and the "Third Party CI Requirement Policy" documentation [2], and then adding this topic to Wednesday's meeting [3] and go to the meeting to ensure that everybody is on board with it. Best regards, Gorka. [1]: https://wiki.openstack.org/wiki/Cinder/how-to-contribute-a-driver [2]: https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers [3]: https://etherpad.openstack.org/p/cinder-rocky-meeting-agendas > > ------------------ Original ------------------ > From: "Gorka Eguileor"; > Date: 2018年7月16日(星期一) 下午5:20 > To: "OpenStack Developmen"; > Subject: Re: [openstack-dev] [cinder] about block device driver > > > On 16/07, Rambo wrote: > > Hi,all > > > > > > In the Cinder repository, I noticed that the BlockDeviceDriver driver is being deprecated, and was eventually be removed with the Queens release. > > > > > > https://github.com/openstack/cinder/blob/stable/ocata/cinder/volume/drivers/block_device.py > > > > > > In my use case, the instances using Cinder perform intense I/O, thus iSCSI or LVM is not a viable option - benchmarked them several times, since Juno, unsatisfactory results.For data processing scenarios is always better to use local storage than any SAN/NAS solution. > > > > > > So I felt a great need to know why we deprecated it.If there has any better one to replace it? 
What do you suggest to use once BlockDeviceDriver is removed?Can you tell me about this?Thank you very much! > > > > Best Regards > > Rambo > > Hi, > > If I remember correctly the driver was deprecated because it had no > maintainer or CI. In Cinder we require our drivers to have both, > otherwise we can't guarantee that they actually work or that anyone will > fix it if it gets broken. > > Cheers, > Gorka. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From a.vamsikrishna at ericsson.com Mon Jul 16 12:44:28 2018 From: a.vamsikrishna at ericsson.com (A Vamsikrishna) Date: Mon, 16 Jul 2018 12:44:28 +0000 Subject: [openstack-dev] [networking-odl] Builds are failing in Stable/pike in networking-odl Message-ID: Hi All, Builds are failing in Stable/pike in networking-odl on below review: https://review.openstack.org/#/c/582745/ looks that issue is here: http://logs.openstack.org/45/582745/5/check/networking-odl-rally-dsvm-carbon-snapshot/be4abe3/logs/devstacklog.txt.gz#_2018-07-15_18_23_41_854 There is 404 from opendaylight.org service and snapshot version is missing & only /-SNAPSHOT/maven-metadata.xml, it should be 0.8.3-SNAPSHOT or 0.9.0-SNAPSHOT This job is making use of carbon based ODL version & not able to find it. Any idea how to fix / proceed further to make stable/pike builds to be successful ? Thanks, Vamsi -------------- next part -------------- An HTML attachment was scrubbed... URL: From bdobreli at redhat.com Mon Jul 16 12:50:49 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Mon, 16 Jul 2018 15:50:49 +0300 Subject: [openstack-dev] [tripleo] Stein blueprint - Plan to remove Keepalived support (replaced by Pacemaker) In-Reply-To: References: Message-ID: <92fac17b-0ca5-402b-9bc7-86ceb80d8b09@redhat.com> I'm all for it! Another benefit is better coverage for the standalone CI job(s), when it will (hopefully) become a mandatory dependency for overcloud multinode jobs. On 7/16/18 12:49 PM, Sergii Golovatiuk wrote: > Hi, > > On Fri, Jul 13, 2018 at 9:11 PM, Juan Antonio Osorio > wrote: >> Sounds good to me. Even if pacemaker is heavier, less options and >> consistency is better. >> >> Greetings from Mexico :D > > Greetings from Poznań :D > >> >> On Fri, 13 Jul 2018, 13:33 Emilien Macchi, wrote: >>> >>> Greetings, >>> >>> We have been supporting both Keepalived and Pacemaker to handle VIP >>> management. > > This is really good initiative which supports the main idea of 'simplicity'. > >>> Keepalived is actually the tool used by the undercloud when SSL is enabled >>> (for SSL termination). >>> While Pacemaker is used on the overcloud to handle VIPs but also services >>> HA. 
>>> >>> I see some benefits at removing support for keepalived and deploying >>> Pacemaker by default: >>> - pacemaker can be deployed on one node (we actually do it in CI), so can >>> be deployed on the undercloud to handle VIPs and manage HA as well. > > Additionally, undercloud services may be done HA on 3 nodes if/when > it's really required. > >>> - it'll allow to extend undercloud & standalone use cases to support >>> multinode one day, with HA and SSL, like we already have on the overcloud. >>> - it removes the complexity of managing two tools so we'll potentially >>> removing code in TripleO. > > ++ > >>> - of course since pacemaker features from overcloud would be usable in >>> standalone environment, but also on the undercloud. > > The same OCF scripts will be used for undercloud and overcloud. > >>> >>> There is probably some downside, the first one is I think Keepalived is >>> much more lightweight than Pacemaker, we probably need to run some benchmark >>> here and make sure we don't make the undercloud heavier than it is now. > > From other perspective operator need to learn/support 2 tools. > >>> >>> I went ahead and created this blueprint for Stein: >>> >>> https://blueprints.launchpad.net/tripleo/+spec/undercloud-pacemaker-default >>> I also plan to prototype some basic code soon and provide an upgrade path >>> if we accept this blueprint. > > I would like to participate in this initiative as I found it very valuable. > >>> >>> This is something I would like to discuss here and at the PTG, feel free >>> to bring questions/concerns, >>> Thanks! >>> -- >>> Emilien Macchi >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > -- Best regards, Bogdan Dobrelya, Irc #bogdando From jaypipes at gmail.com Mon Jul 16 13:16:51 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 16 Jul 2018 09:16:51 -0400 Subject: [openstack-dev] [nova][placement] placement update 18-28 Message-ID: <5453eca1-1463-80b1-071c-ed76254cfbd1@gmail.com> This is placement update 18-28, a weekly update of ongoing development related to the [OpenStack](https://www.openstack.org/) [placement service](https://developer.openstack.org/api-ref/placement/). This week I'm trying to fill Chris' esteemable shoes while he's away. # Most Important ## Reshape Provider Trees Code series: There are at least four different contributors working on various parts of the "reshape provider trees" spec implementation . Three of the four were blocked on work I was supposed to complete around the single DB transaction for modifying inventory and allocation records atomically. So, this was the focus of work for week 28, in order to unblock other contributors. Work on the primary patch in the series is ongoing, with excellent feedback and code additions from Eric Fried, Tetsuro Nakamura, Balasz Gibizer and Chris Dent. We hope to have this code merged in the next day or two. 
There are WIPs for the HTTP parts and the resource tracker parts, on that topic, but both of those are dependent on the DB work merging.

# Important bug categories

In week 27 we discovered a set of bugs related to consumers and the handling of consumer generations. Most of these have now been fixed. Here is a list of these bugs along with their status:

* No ability to update consumer's project/user external ID FIX RELEASED
* Possible race updating consumer's project/user NEW
* default missing project/user in placement is invalid UUID FIX RELEASED
* Consumers with no allocations should be auto-deleted FIX RELEASED
* Auto-created consumer record not clean up after fail allocation FIX RELEASED
* Making new allocation for one consumer and multiple providers gives 409 Conflict FIX RELEASED
* AllocationList.delete_all() incorrectly assumes a single consumer IN PROGRESS
* Consumers never get deleted FIX RELEASED
* ensure-consumer gabbi test uses invalid consumer id IN PROGRESS
* return 404 when no consumer found in allocs IN PROGRESS (lower priority now that consumers with no allocations are auto-deleted)

**DECISION MADE**: The team made a decision to automatically remove any consumer record when there were no more allocations for that consumer. Remember that for the Nova use case, a consumer is either an instance or an on-going migration. So, when an instance is terminated, the consumer record that stores attributes about the instance -- such as the project and user IDs -- is now removed.

The other area of bugginess that was uncovered in week 27 and addressed in week 28 was related to various ways in which managing parents of nested providers was incorrect. Those were:

* placement allows RP parent loop in PUT resource_providers/{uuid} FIX RELEASED
* Child's root provider is not updated FIX RELEASED

Both of those, as you can see, have been fixed.

# Bugs

* Placement related [bugs not yet in progress](https://goo.gl/TgiPXb): 15, -1 on last week.
* [In progress placement bugs](https://goo.gl/vzGGDQ) 14, -3 on last week.

# Other

The following continue to remain from the previous week and are copied verbatim from Chris' week 27 update.
* Purge comp_node and res_prvdr records during deletion of cells/hosts * Get resource provider by uuid or name (osc-placement) * Tighten up ReportClient use of generation * Add unit test for non-placement resize * Move refresh time from report client to prov tree * PCPU resource class * rework how we pass candidate request information * add root parent NULL online migration * add resource_requests field to RequestSpec * Convert driver supported capabilities to compute node provider traits * Use placement.inventory.inuse in report client * ironic: Report resources as reserved when needed * Test for multiple limit/group_policy qparams * [placement] api-ref: add traits parameter * Convert 'placement_api_docs' into a Sphinx extension * Test for multiple limit/group_policy qparams * Disable limits if force_hosts or force_nodes is set * Rename auth_uri to www_authenticate_uri * Blazar's work on using placement Best, -jay From michele at acksyn.org Mon Jul 16 13:27:23 2018 From: michele at acksyn.org (Michele Baldessari) Date: Mon, 16 Jul 2018 15:27:23 +0200 Subject: [openstack-dev] [tripleo] Stein blueprint - Plan to remove Keepalived support (replaced by Pacemaker) In-Reply-To: References: Message-ID: <20180716132723.GA4445@palahniuk.int.rhx> Hi Emilien, On Fri, Jul 13, 2018 at 02:33:02PM -0400, Emilien Macchi wrote: > We have been supporting both Keepalived and Pacemaker to handle VIP > management. Keepalived is actually the tool used by the undercloud > when SSL is enabled (for SSL termination). While Pacemaker is used on > the overcloud to handle VIPs but also services HA. > > I see some benefits at removing support for keepalived and deploying > Pacemaker by default: > - pacemaker can be deployed on one node (we actually do it in CI), so can > be deployed on the undercloud to handle VIPs and manage HA as well. > - it'll allow to extend undercloud & standalone use cases to support > multinode one day, with HA and SSL, like we already have on the overcloud. > - it removes the complexity of managing two tools so we'll potentially > removing code in TripleO. > - of course since pacemaker features from overcloud would be usable in > standalone environment, but also on the undercloud. > > There is probably some downside, the first one is I think Keepalived is > much more lightweight than Pacemaker, we probably need to run some > benchmark here and make sure we don't make the undercloud heavier than it > is now. Right, I think the service startup of pacemaker/corosync + starting the VIP will be a bit slower than a keepalived approach (seconds). But after that there should not be a lot of difference. Also, we're about to land proper support for updating pcmk resources so in the future managing them will also be a bit simpler than it is now. > I went ahead and created this blueprint for Stein: > https://blueprints.launchpad.net/tripleo/+spec/undercloud-pacemaker-default > I also plan to prototype some basic code soon and provide an upgrade path > if we accept this blueprint. I like the approach for the reasons you mention above. I'll be happy to chat about this at the PTG and help out in general with this. 
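For anyone who wants a feel for the footprint on a one-node undercloud, the
whole thing boils down to a handful of commands. This is a rough sketch only:
pcs syntax varies a bit between versions, and the node name, password and VIP
below are placeholders.

  # One-node cluster carrying a single VIP; there is no fencing peer, so
  # stonith is disabled. Adjust node name, credentials and addresses.
  pcs cluster auth undercloud-0 -u hacluster -p '<password>'
  pcs cluster setup --name undercloud undercloud-0 --start --enable
  pcs property set stonith-enabled=false
  pcs resource create control-vip ocf:heartbeat:IPaddr2 \
      ip=192.168.24.2 cidr_netmask=24 op monitor interval=10s

Once the cluster is up, "pcs status" shows the VIP like any other resource,
which is where the few extra seconds of startup mentioned above come from.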
thanks, Michele -- Michele Baldessari C2A5 9DA3 9961 4FFB E01B D0BC DDD4 DCCB 7515 5C6D From sean.mcginnis at gmx.com Mon Jul 16 13:32:37 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 16 Jul 2018 08:32:37 -0500 Subject: [openstack-dev] [cinder] about block device driver In-Reply-To: <20180716113226.hdqpzfkeyjkpx5kn@localhost> References: <20180716092027.pc43radmozdgndd5@localhost> <20180716113226.hdqpzfkeyjkpx5kn@localhost> Message-ID: <20180716133237.GA19698@sm-workstation> On Mon, Jul 16, 2018 at 01:32:26PM +0200, Gorka Eguileor wrote: > On 16/07, Rambo wrote: > > Well,in my opinion,the BlockDeviceDriver is more suitable than any other solution for data processing scenarios.Does the community will agree to merge the BlockDeviceDriver to the Cinder repository again if our company hold the maintainer and CI? > > > > Hi, > > I'm sure the community will be happy to merge the driver back into the > repository. > The other reason for its removal was its inability to meet the minimum feature set required for Cinder drivers along with benchmarks showing the LVM and iSCSI driver could be tweaked to have similar or better performance. The other option would be to not use Cinder volumes so you just use local storage on your compute nodes. Readding the block device driver is not likely an option. From jaypipes at gmail.com Mon Jul 16 13:43:37 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 16 Jul 2018 09:43:37 -0400 Subject: [openstack-dev] [cinder] about block device driver In-Reply-To: <20180716133237.GA19698@sm-workstation> References: <20180716092027.pc43radmozdgndd5@localhost> <20180716113226.hdqpzfkeyjkpx5kn@localhost> <20180716133237.GA19698@sm-workstation> Message-ID: <96bd102c-047f-fc4c-b229-b14ed5e24453@gmail.com> On 07/16/2018 09:32 AM, Sean McGinnis wrote: > The other option would be to not use Cinder volumes so you just use local > storage on your compute nodes. ^^ yes, this. -jay From jaypipes at gmail.com Mon Jul 16 14:12:47 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 16 Jul 2018 10:12:47 -0400 Subject: [openstack-dev] [placement] Low hanging fruit bug for interested newcomers Message-ID: <152f6537-46e9-6a78-b8ba-fe855911fae0@gmail.com> Hi all, Here's a testing and documentation bug that would be great for newcomers to the placement project: https://bugs.launchpad.net/nova/+bug/1781439 Come find us on #openstack-placement on Freenode IRC to chat about it if you're interested! Best, -jay From Arkady.Kanevsky at dell.com Mon Jul 16 14:15:28 2018 From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com) Date: Mon, 16 Jul 2018 14:15:28 +0000 Subject: [openstack-dev] [cinder] about block device driver In-Reply-To: <96bd102c-047f-fc4c-b229-b14ed5e24453@gmail.com> References: <20180716092027.pc43radmozdgndd5@localhost> <20180716113226.hdqpzfkeyjkpx5kn@localhost> <20180716133237.GA19698@sm-workstation> <96bd102c-047f-fc4c-b229-b14ed5e24453@gmail.com> Message-ID: <1755f9833c3a495c9f983f6aa2b781a2@AUSX13MPS308.AMER.DELL.COM> Is this for ephemeral storage handling? -----Original Message----- From: Jay Pipes [mailto:jaypipes at gmail.com] Sent: Monday, July 16, 2018 8:44 AM To: openstack-dev at lists.openstack.org Subject: Re: [openstack-dev] [cinder] about block device driver On 07/16/2018 09:32 AM, Sean McGinnis wrote: > The other option would be to not use Cinder volumes so you just use > local storage on your compute nodes. ^^ yes, this. 
-jay __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From jaypipes at gmail.com Mon Jul 16 14:17:43 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 16 Jul 2018 10:17:43 -0400 Subject: [openstack-dev] [cinder] about block device driver In-Reply-To: <1755f9833c3a495c9f983f6aa2b781a2@AUSX13MPS308.AMER.DELL.COM> References: <20180716092027.pc43radmozdgndd5@localhost> <20180716113226.hdqpzfkeyjkpx5kn@localhost> <20180716133237.GA19698@sm-workstation> <96bd102c-047f-fc4c-b229-b14ed5e24453@gmail.com> <1755f9833c3a495c9f983f6aa2b781a2@AUSX13MPS308.AMER.DELL.COM> Message-ID: <82d2ee56-77ed-8bb6-5316-85fa8eaa9b2c@gmail.com> On 07/16/2018 10:15 AM, Arkady.Kanevsky at dell.com wrote: > Is this for ephemeral storage handling? For both ephemeral as well as root disk. In other words, just act like Cinder isn't there and attach a big local root disk to the instance. Best, -jay > -----Original Message----- > From: Jay Pipes [mailto:jaypipes at gmail.com] > Sent: Monday, July 16, 2018 8:44 AM > To: openstack-dev at lists.openstack.org > Subject: Re: [openstack-dev] [cinder] about block device driver > > On 07/16/2018 09:32 AM, Sean McGinnis wrote: >> The other option would be to not use Cinder volumes so you just use >> local storage on your compute nodes. > > ^^ yes, this. > > -jay > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From a.vamsikrishna at ericsson.com Mon Jul 16 15:04:00 2018 From: a.vamsikrishna at ericsson.com (A Vamsikrishna) Date: Mon, 16 Jul 2018 15:04:00 +0000 Subject: [openstack-dev] [networking-odl] Builds are failing in Stable/pike in networking-odl In-Reply-To: References: Message-ID: +Isaku Hi Isaku, I found the reason for the build failure. below path it should be distribution-artifacts instead of distribution-karaf https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot/org/opendaylight/integration/distribution-karaf/-SNAPSHOT/maven-metadata.xml Line no: 7 is causing the problem https://github.com/openstack/networking-odl/blob/stable/pike/devstack/functions >From logs: http://logs.openstack.org/45/582745/5/check/networking-odl-rally-dsvm-carbon-snapshot/be4abe3/logs/devstacklog.txt.gz#_2018-07-15_18_23_41_854 opt/stack/new/networking-odl/devstack/functions:_odl_nexus_path:7 : echo https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot/org/opendaylight/integration/distribution-karaf I think below code needs a fix, Can you please help us out ? 
https://github.com/openstack/networking-odl/blob/stable/pike/devstack/settings.odl#L72-L81 case "$ODL_RELEASE" in latest-snapshot|nitrogen-snapshot-0.7*) # use karaf because distribution-karaf isn't available for Nitrogen at the moment # TODO(yamahata): when distriution-karaf is available, remove this ODL_URL_DISTRIBUTION_KARAF_PATH=${ODL_URL_DISTRIBUTION_KARAF_PATH:-org/opendaylight/integration/karaf} ;; *) ODL_URL_DISTRIBUTION_KARAF_PATH=${ODL_URL_DISTRIBUTION_KARAF_PATH:-org/opendaylight/integration/distribution-karaf} ;; Esac Thanks, Vamsi From: A Vamsikrishna Sent: Monday, July 16, 2018 6:14 PM To: 'openstack-dev at lists.openstack.org' ; openstack at lists.openstack.org Subject: [networking-odl] Builds are failing in Stable/pike in networking-odl Hi All, Builds are failing in Stable/pike in networking-odl on below review: https://review.openstack.org/#/c/582745/ looks that issue is here: http://logs.openstack.org/45/582745/5/check/networking-odl-rally-dsvm-carbon-snapshot/be4abe3/logs/devstacklog.txt.gz#_2018-07-15_18_23_41_854 There is 404 from opendaylight.org service and snapshot version is missing & only /-SNAPSHOT/maven-metadata.xml, it should be 0.8.3-SNAPSHOT or 0.9.0-SNAPSHOT This job is making use of carbon based ODL version & not able to find it. Any idea how to fix / proceed further to make stable/pike builds to be successful ? Thanks, Vamsi -------------- next part -------------- An HTML attachment was scrubbed... URL: From frode.nordahl at gmail.com Mon Jul 16 15:08:45 2018 From: frode.nordahl at gmail.com (Frode Nordahl) Date: Mon, 16 Jul 2018 17:08:45 +0200 Subject: [openstack-dev] [charms][ptg] Stein PTG planning etherpad for Charms Message-ID: Hello Charmers, A etherpad for planning of the upcoming PTG in Denver has been created [0]. Please make a note signalling your attendance and any topics you want covered. 0: https://etherpad.openstack.org/p/charms-stein-ptg -- Frode Nordahl -------------- next part -------------- An HTML attachment was scrubbed... URL: From jimmy at openstack.org Mon Jul 16 15:14:23 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Mon, 16 Jul 2018 10:14:23 -0500 Subject: [openstack-dev] [i18n] Edge and Containers whitepapers ready for translation Message-ID: <5B4CB64F.4060602@openstack.org> Hi all - We have both of the current whitepapers up and available for translation. Can we promote these on the Zanata homepage? https://translate.openstack.org/project/view/leveraging-containers-openstack?dswid=5684 https://translate.openstack.org/iteration/view/edge-computing/master/documents?dswid=5684 Thanks all! Jimmy From jimmy at openstack.org Mon Jul 16 15:26:55 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Mon, 16 Jul 2018 10:26:55 -0500 Subject: [openstack-dev] [i18n] Edge and Containers whitepapers ready for translation In-Reply-To: <5B4CB64F.4060602@openstack.org> References: <5B4CB64F.4060602@openstack.org> Message-ID: <5B4CB93F.6070202@openstack.org> Sorry, I should have also added... we additionally need permissions so that we can add the a new version of the pot file to this project: https://translate.openstack.org/project/view/edge-computing/versions?dswid=-7835 Thanks! Jimmy Jimmy McArthur wrote: > Hi all - > > We have both of the current whitepapers up and available for > translation. Can we promote these on the Zanata homepage? 
> > https://translate.openstack.org/project/view/leveraging-containers-openstack?dswid=5684 > > https://translate.openstack.org/iteration/view/edge-computing/master/documents?dswid=5684 > > > Thanks all! > Jimmy From cjeanner at redhat.com Mon Jul 16 15:27:08 2018 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Mon, 16 Jul 2018 17:27:08 +0200 Subject: [openstack-dev] [tripleo] New "validation" subcommand for "openstack undercloud" Message-ID: Dear Stackers, In order to let operators properly validate their undercloud node, I propose to create a new subcommand in the "openstack undercloud" "tree": `openstack undercloud validate' This should only run the different validations we have in the undercloud_preflight.py¹ That way, an operator will be able to ensure all is valid before starting "for real" any other command like "install" or "upgrade". Of course, this "validate" step is embedded in the "install" and "upgrade" already, but having the capability to just validate without any further action is something that can be interesting, for example: - ensure the current undercloud hardware/vm is sufficient for an update - ensure the allocated VM for the undercloud is sufficient for a deploy - and so on There are probably other possibilities, if we extend the "validation" scope outside the "undercloud" (like, tripleo, allinone, even overcloud). What do you think? Any pros/cons/thoughts? Cheers, C. ¹ http://git.openstack.org/cgit/openstack/python-tripleoclient/tree/tripleoclient/v1/undercloud_preflight.py -- Cédric Jeanneret Software Engineer DFG:DF -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From dprince at redhat.com Mon Jul 16 15:32:26 2018 From: dprince at redhat.com (Dan Prince) Date: Mon, 16 Jul 2018 11:32:26 -0400 Subject: [openstack-dev] [tripleo] New "validation" subcommand for "openstack undercloud" In-Reply-To: References: Message-ID: On Mon, Jul 16, 2018 at 11:27 AM Cédric Jeanneret wrote: > > Dear Stackers, > > In order to let operators properly validate their undercloud node, I > propose to create a new subcommand in the "openstack undercloud" "tree": > `openstack undercloud validate' > > This should only run the different validations we have in the > undercloud_preflight.py¹ > That way, an operator will be able to ensure all is valid before > starting "for real" any other command like "install" or "upgrade". > > Of course, this "validate" step is embedded in the "install" and > "upgrade" already, but having the capability to just validate without > any further action is something that can be interesting, for example: > > - ensure the current undercloud hardware/vm is sufficient for an update > - ensure the allocated VM for the undercloud is sufficient for a deploy > - and so on > > There are probably other possibilities, if we extend the "validation" > scope outside the "undercloud" (like, tripleo, allinone, even overcloud). > > What do you think? Any pros/cons/thoughts? I think this command could be very useful. I'm assuming the underlying implementation would call a 'heat stack-validate' using an ephemeral heat-all instance. If so way we implement it for the undercloud vs the 'standalone' use case would likely be a bit different. We can probably subclass the implementations to share common code across the efforts though. For the undercloud you are likely to have a few extra 'local only' validations. 
Perhaps extra checks for things on the client side. For the all-in-one I had envisioned using the output from the 'heat stack-validate' to create a sample config file for a custom set of services. Similar to how tools like Packstack generate a config file for example. Dan > > Cheers, > > C. > > > > ¹ > http://git.openstack.org/cgit/openstack/python-tripleoclient/tree/tripleoclient/v1/undercloud_preflight.py > -- > Cédric Jeanneret > Software Engineer > DFG:DF > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From amotoki at gmail.com Mon Jul 16 15:40:26 2018 From: amotoki at gmail.com (Akihiro Motoki) Date: Tue, 17 Jul 2018 00:40:26 +0900 Subject: [openstack-dev] [i18n] Edge and Containers whitepapers ready for translation In-Reply-To: <5B4CB93F.6070202@openstack.org> References: <5B4CB64F.4060602@openstack.org> <5B4CB93F.6070202@openstack.org> Message-ID: Jimmy, Does it mean translation **publishing** is ready now? In the first translation of the edge computing whitepaper, translation publishing was completely a manual process and it takes too long. If translation publishing support is not ready, it is a bit discouraging from translator perspective. We translators would like to have some automated way to publish translated versions of documents and it allows translators to improve and/or fix translations after the initial translation. I propose an idea of using RST file for the edge computing whitepaper after the Vancouver summit. Ildiko said it is being discussed in the edge computing team and the foundation but I haven't heard nothing since then. Or can the foundation publish translations more quickly even though the current publishing process is not changed. If not, I cannot believe this is the right timing to promote whitepaper translations..... Thanks, Akihiro 2018年7月17日(火) 0:27 Jimmy McArthur : > Sorry, I should have also added... we additionally need permissions so > that we can add the a new version of the pot file to this project: > > https://translate.openstack.org/project/view/edge-computing/versions?dswid=-7835 > > Thanks! > Jimmy > > > > Jimmy McArthur wrote: > > Hi all - > > > > We have both of the current whitepapers up and available for > > translation. Can we promote these on the Zanata homepage? > > > > > https://translate.openstack.org/project/view/leveraging-containers-openstack?dswid=5684 > > > > > https://translate.openstack.org/iteration/view/edge-computing/master/documents?dswid=5684 > > > > > > Thanks all! > > Jimmy > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dprince at redhat.com Mon Jul 16 15:41:43 2018 From: dprince at redhat.com (Dan Prince) Date: Mon, 16 Jul 2018 11:41:43 -0400 Subject: [openstack-dev] [tripleo] Stein blueprint - Plan to remove Keepalived support (replaced by Pacemaker) In-Reply-To: References: Message-ID: On Fri, Jul 13, 2018 at 2:34 PM Emilien Macchi wrote: > > Greetings, > > We have been supporting both Keepalived and Pacemaker to handle VIP management. 
> Keepalived is actually the tool used by the undercloud when SSL is enabled (for SSL termination). > While Pacemaker is used on the overcloud to handle VIPs but also services HA. > > I see some benefits at removing support for keepalived and deploying Pacemaker by default: > - pacemaker can be deployed on one node (we actually do it in CI), so can be deployed on the undercloud to handle VIPs and manage HA as well. > - it'll allow to extend undercloud & standalone use cases to support multinode one day, with HA and SSL, like we already have on the overcloud. > - it removes the complexity of managing two tools so we'll potentially removing code in TripleO. > - of course since pacemaker features from overcloud would be usable in standalone environment, but also on the undercloud. > > There is probably some downside, the first one is I think Keepalived is much more lightweight than Pacemaker, we probably need to run some benchmark here and make sure we don't make the undercloud heavier than it is now. The biggest downside IMO is the fact that our Pacemaker integration is not containerized. Nor are there any plans to finish the containerization of it. Pacemaker has to currently run on baremetal and this makes the installation of it for small dev/test setups a lot less desirable. It can launch containers just fine but the pacemaker installation itself is what concerns me for the long term. Until we have plans for containizing it I suppose I would rather see us keep keepalived as an option for these smaller setups. We can certainly change our default Undercloud to use Pacemaker (if we choose to do so). But having keepalived around for "lightweight" (zero or low footprint) installs that work is really quite desirable. Dan > > I went ahead and created this blueprint for Stein: > https://blueprints.launchpad.net/tripleo/+spec/undercloud-pacemaker-default > I also plan to prototype some basic code soon and provide an upgrade path if we accept this blueprint. > > This is something I would like to discuss here and at the PTG, feel free to bring questions/concerns, > Thanks! > -- > Emilien Macchi > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From jimmy at openstack.org Mon Jul 16 15:46:47 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Mon, 16 Jul 2018 10:46:47 -0500 Subject: [openstack-dev] [i18n] Edge and Containers whitepapers ready for translation In-Reply-To: References: <5B4CB64F.4060602@openstack.org> <5B4CB93F.6070202@openstack.org> Message-ID: <5B4CBDE7.3020600@openstack.org> Akihiro Motoki wrote: > Jimmy, > > Does it mean translation **publishing** is ready now? > > In the first translation of the edge computing whitepaper, translation > publishing was completely a manual process and it takes too long. The initial version was a volunteer effort, but the Foundation wasn't aware it was taking place until after it was all done. Our efforts are aligned around working through standard Zanata publishing processes. > If translation publishing support is not ready, it is a bit > discouraging from translator perspective. > We translators would like to have some automated way to publish > translated versions of documents > and it allows translators to improve and/or fix translations after the > initial translation. Translation publishing is now available. 
We just need permissions to add the pot file to the project. Moving forward, the process will be automated through Zanata in the same way we do the user survey and other features. > > I propose an idea of using RST file for the edge computing whitepaper > after the Vancouver summit. Yes, we're following the standard publishing requirements outlined here: https://docs.openstack.org/i18n/latest/en_GB/tools.html > Ildiko said it is being discussed in the edge computing team and the > foundation but I haven't heard nothing since then. > Or can the foundation publish translations more quickly even though > the current publishing process is not changed. > If not, I cannot believe this is the right timing to promote > whitepaper translations..... > > Thanks, > Akihiro > > > 2018年7月17日(火) 0:27 Jimmy McArthur >: > > Sorry, I should have also added... we additionally need > permissions so > that we can add the a new version of the pot file to this project: > https://translate.openstack.org/project/view/edge-computing/versions?dswid=-7835 > > Thanks! > Jimmy > > > > Jimmy McArthur wrote: > > Hi all - > > > > We have both of the current whitepapers up and available for > > translation. Can we promote these on the Zanata homepage? > > > > > https://translate.openstack.org/project/view/leveraging-containers-openstack?dswid=5684 > > > > > > https://translate.openstack.org/iteration/view/edge-computing/master/documents?dswid=5684 > > > > > > > Thanks all! > > Jimmy > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From jimmy at openstack.org Mon Jul 16 15:48:43 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Mon, 16 Jul 2018 10:48:43 -0500 Subject: [openstack-dev] [i18n] Edge and Containers whitepapers ready for translation In-Reply-To: <5B4CBDE7.3020600@openstack.org> References: <5B4CB64F.4060602@openstack.org> <5B4CB93F.6070202@openstack.org> <5B4CBDE7.3020600@openstack.org> Message-ID: <5B4CBE5B.4000602@openstack.org> Jimmy McArthur wrote: > > > Akihiro Motoki wrote: >> Jimmy, >> >> Does it mean translation **publishing** is ready now? >> >> In the first translation of the edge computing whitepaper, >> translation publishing was completely a manual process and it takes >> too long. > The initial version was a volunteer effort, but the Foundation wasn't > aware it was taking place until after it was all done. Our efforts > are aligned around working through standard Zanata publishing processes. Also, FTR, I believe it should be easy enough to take those initial manual translations and place them into the po files once we add the pot file to the project. >> If translation publishing support is not ready, it is a bit >> discouraging from translator perspective. >> We translators would like to have some automated way to publish >> translated versions of documents >> and it allows translators to improve and/or fix translations after >> the initial translation. 
> Translation publishing is now available. We just need permissions to > add the pot file to the project. Moving forward, the process will be > automated through Zanata in the same way we do the user survey and > other features. >> >> I propose an idea of using RST file for the edge computing whitepaper >> after the Vancouver summit. > Yes, we're following the standard publishing requirements outlined > here: https://docs.openstack.org/i18n/latest/en_GB/tools.html >> Ildiko said it is being discussed in the edge computing team and the >> foundation but I haven't heard nothing since then. >> Or can the foundation publish translations more quickly even though >> the current publishing process is not changed. >> If not, I cannot believe this is the right timing to promote >> whitepaper translations..... > >> >> Thanks, >> Akihiro >> >> >> 2018年7月17日(火) 0:27 Jimmy McArthur > >: >> >> Sorry, I should have also added... we additionally need >> permissions so >> that we can add the a new version of the pot file to this project: >> https://translate.openstack.org/project/view/edge-computing/versions?dswid=-7835 >> >> Thanks! >> Jimmy >> >> >> >> Jimmy McArthur wrote: >> > Hi all - >> > >> > We have both of the current whitepapers up and available for >> > translation. Can we promote these on the Zanata homepage? >> > >> > >> https://translate.openstack.org/project/view/leveraging-containers-openstack?dswid=5684 >> >> > >> > >> https://translate.openstack.org/iteration/view/edge-computing/master/documents?dswid=5684 >> >> > >> > >> > Thanks all! >> > Jimmy >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe:OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Mon Jul 16 15:48:51 2018 From: emilien at redhat.com (Emilien Macchi) Date: Mon, 16 Jul 2018 11:48:51 -0400 Subject: [openstack-dev] [tripleo] Stein blueprint - Plan to remove Keepalived support (replaced by Pacemaker) In-Reply-To: References: Message-ID: On Mon, Jul 16, 2018 at 11:42 AM Dan Prince wrote: [...] > The biggest downside IMO is the fact that our Pacemaker integration is > not containerized. Nor are there any plans to finish the > containerization of it. Pacemaker has to currently run on baremetal > and this makes the installation of it for small dev/test setups a lot > less desirable. It can launch containers just fine but the pacemaker > installation itself is what concerns me for the long term. > > Until we have plans for containizing it I suppose I would rather see > us keep keepalived as an option for these smaller setups. We can > certainly change our default Undercloud to use Pacemaker (if we choose > to do so). 
But having keepalived around for "lightweight" (zero or low > footprint) installs that work is really quite desirable. > That's a good point, and I agree with your proposal. Michele, what's the long term plan regarding containerized pacemaker? -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From michele at acksyn.org Mon Jul 16 18:07:19 2018 From: michele at acksyn.org (Michele Baldessari) Date: Mon, 16 Jul 2018 20:07:19 +0200 Subject: [openstack-dev] [tripleo] Stein blueprint - Plan to remove Keepalived support (replaced by Pacemaker) In-Reply-To: References: Message-ID: <20180716180719.GB4445@palahniuk.int.rhx> On Mon, Jul 16, 2018 at 11:48:51AM -0400, Emilien Macchi wrote: > On Mon, Jul 16, 2018 at 11:42 AM Dan Prince wrote: > [...] > > > The biggest downside IMO is the fact that our Pacemaker integration is > > not containerized. Nor are there any plans to finish the > > containerization of it. Pacemaker has to currently run on baremetal > > and this makes the installation of it for small dev/test setups a lot > > less desirable. It can launch containers just fine but the pacemaker > > installation itself is what concerns me for the long term. > > > > Until we have plans for containizing it I suppose I would rather see > > us keep keepalived as an option for these smaller setups. We can > > certainly change our default Undercloud to use Pacemaker (if we choose > > to do so). But having keepalived around for "lightweight" (zero or low > > footprint) installs that work is really quite desirable. > > > > That's a good point, and I agree with your proposal. > Michele, what's the long term plan regarding containerized pacemaker? Well, we kind of started evaluating it (there was definitely not enough time around pike/queens as we were busy landing the bundles code), then due to discussions around k8s it kind of got off our radar. We can at least resume the discussions around it and see how much effort it would be. I'll bring it up with my team and get back to you. cheers, Michele -- Michele Baldessari C2A5 9DA3 9961 4FFB E01B D0BC DDD4 DCCB 7515 5C6D From kennelson11 at gmail.com Mon Jul 16 18:12:32 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Mon, 16 Jul 2018 13:12:32 -0500 Subject: [openstack-dev] [requirements][storyboard] Updates between SB and LP In-Reply-To: <20180712065350.GB22285@thor.bakeyournoodle.com> References: <20180712065350.GB22285@thor.bakeyournoodle.com> Message-ID: Hey Tony :) So I think the best way to deal with it would to disable bug reporting for requirements in LP (so nothing new can get filed there and the index won't be discoverable but you can still edit the old bugs) and then to do periodic runs of the migration script to pick up any changes that have happened. As for dumping SB changes back into LP I don't know that there is a way. Maybe someone else has an idea. Hopefully by updating the description of the reqs project in LP, people will know where to look for the new/current information. Hope that helps! -Kendall (diablo_rojo) On Wed, Jul 11, 2018 at 11:54 PM Tony Breeds wrote: > Hi all, > The requirements team is only a light user of Launchpad and we're > looking at moving to StoryBoard as it looks like for the most part it'll > be a better fit. > > To date the thing that has stopped us doing this is the handling of > bugs/stories that are shared between LP and SB. 
> > Assume that requirements had migrated to SB, how would be deal with bugs > like: https://bugs.launchpad.net/openstack-requirements/+bug/1753969 > > Is there a, supportable, bi-directional path between SB and LP? > > I suspect the answer is No. I imagine if we only wanted to get > updates from LP reflected in our SB story we could just leave the > bug tracker open on LP and run the migration tool "often". > > Yours Tony. > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From prometheanfire at gentoo.org Mon Jul 16 18:52:21 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Mon, 16 Jul 2018 13:52:21 -0500 Subject: [openstack-dev] [requirements][storyboard] Updates between SB and LP In-Reply-To: References: <20180712065350.GB22285@thor.bakeyournoodle.com> Message-ID: <20180716185221.ifycsdvhjn3xm5oz@gentoo.org> On 18-07-16 13:12:32, Kendall Nelson wrote: > Hey Tony :) > > So I think the best way to deal with it would to disable bug reporting for > requirements in LP (so nothing new can get filed there and the index won't > be discoverable but you can still edit the old bugs) and then to do > periodic runs of the migration script to pick up any changes that have > happened. > > As for dumping SB changes back into LP I don't know that there is a way. > Maybe someone else has an idea. Hopefully by updating the description of > the reqs project in LP, people will know where to look for the new/current > information. > > Hope that helps! > > -Kendall (diablo_rojo) > > On Wed, Jul 11, 2018 at 11:54 PM Tony Breeds > wrote: > > > Hi all, > > The requirements team is only a light user of Launchpad and we're > > looking at moving to StoryBoard as it looks like for the most part it'll > > be a better fit. > > > > To date the thing that has stopped us doing this is the handling of > > bugs/stories that are shared between LP and SB. > > > > Assume that requirements had migrated to SB, how would be deal with bugs > > like: https://bugs.launchpad.net/openstack-requirements/+bug/1753969 > > > > Is there a, supportable, bi-directional path between SB and LP? > > > > I suspect the answer is No. I imagine if we only wanted to get > > updates from LP reflected in our SB story we could just leave the > > bug tracker open on LP and run the migration tool "often". > > The feature we use most heavilly is the cross project tracking. I'm just not sure how we'd deal with that when some projects are on LP and some on SB. How would we communicate something like the pycrypto removal in that case? https://bugs.launchpad.net/openstack-requirements/+bug/1749574 -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From isaku.yamahata at gmail.com Mon Jul 16 18:53:10 2018 From: isaku.yamahata at gmail.com (Isaku Yamahata) Date: Mon, 16 Jul 2018 11:53:10 -0700 Subject: [openstack-dev] [networking-odl] Builds are failing in Stable/pike in networking-odl In-Reply-To: References: Message-ID: <20180716185310.GA26673@private.email.ne.jp> Hello Vamsikrishna. Carbon snapshot hasn't been build any more and not available from opendaylight nexus server. 
But the carbon job tried to get carbon snapshot and filed. So the fix is to use latest(and final?) carbon release(carbon SR4) instead of carbon snapshot. Maybe you'd like to twist networking-odl/devstack/pre_test_hook.sh or networking-odl/devstack/odl-release/carbon-snapshot to point carbon SR4 release. thanks, On Mon, Jul 16, 2018 at 03:04:00PM +0000, A Vamsikrishna wrote: > +Isaku > > Hi Isaku, > > I found the reason for the build failure. below path it should be distribution-artifacts instead of distribution-karaf > > https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot/org/opendaylight/integration/distribution-karaf/-SNAPSHOT/maven-metadata.xml > > Line no: 7 is causing the problem > > https://github.com/openstack/networking-odl/blob/stable/pike/devstack/functions > > From logs: > > http://logs.openstack.org/45/582745/5/check/networking-odl-rally-dsvm-carbon-snapshot/be4abe3/logs/devstacklog.txt.gz#_2018-07-15_18_23_41_854 > > > opt/stack/new/networking-odl/devstack/functions:_odl_nexus_path:7 : echo https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot/org/opendaylight/integration/distribution-karaf > > > I think below code needs a fix, Can you please help us out ? > > https://github.com/openstack/networking-odl/blob/stable/pike/devstack/settings.odl#L72-L81 > > case "$ODL_RELEASE" in > > > latest-snapshot|nitrogen-snapshot-0.7*) > > > # use karaf because distribution-karaf isn't available for Nitrogen at the moment > > > # TODO(yamahata): when distriution-karaf is available, remove this > > > ODL_URL_DISTRIBUTION_KARAF_PATH=${ODL_URL_DISTRIBUTION_KARAF_PATH:-org/opendaylight/integration/karaf} > > > ;; > > > *) > > > ODL_URL_DISTRIBUTION_KARAF_PATH=${ODL_URL_DISTRIBUTION_KARAF_PATH:-org/opendaylight/integration/distribution-karaf} > > > ;; > > > Esac > > > > Thanks, > Vamsi > > From: A Vamsikrishna > Sent: Monday, July 16, 2018 6:14 PM > To: 'openstack-dev at lists.openstack.org' ; openstack at lists.openstack.org > Subject: [networking-odl] Builds are failing in Stable/pike in networking-odl > > Hi All, > > Builds are failing in Stable/pike in networking-odl on below review: > > https://review.openstack.org/#/c/582745/ > looks that issue is here: http://logs.openstack.org/45/582745/5/check/networking-odl-rally-dsvm-carbon-snapshot/be4abe3/logs/devstacklog.txt.gz#_2018-07-15_18_23_41_854 > There is 404 from opendaylight.org service and snapshot version is missing & only /-SNAPSHOT/maven-metadata.xml, it should be 0.8.3-SNAPSHOT or 0.9.0-SNAPSHOT > This job is making use of carbon based ODL version & not able to find it. > > Any idea how to fix / proceed further to make stable/pike builds to be successful ? > > > Thanks, > Vamsi -- Isaku Yamahata From manjeet.s.bhatia at intel.com Tue Jul 17 00:27:18 2018 From: manjeet.s.bhatia at intel.com (Bhatia, Manjeet S) Date: Tue, 17 Jul 2018 00:27:18 +0000 Subject: [openstack-dev] [neutron] Bug deputy report 07/10/2018 - 07/16/2018 Message-ID: Hi, There were total of 6 new bugs reported. I guess everyone was busy following soccer finals, so not a huge amount of bugs I see were Reported. The only Bug which I marked High priority was a documentation, which lacked the information about allowed dscp marking Values. High Priority 1. 
https://bugs.launchpad.net/neutron/+bug/1781915 QoS (DSCP Mark IDs) - No correlation between the implemented functionality and design looks like this bug is due to incomplete documentation, I marked it as high priority, since document is confusing operators about allowed dscp marking. Some fixes to docs were proposed https://review.openstack.org/#/c/582979/ , https://review.openstack.org/#/c/582974/ Bugs of Medium importance 2. https://bugs.launchpad.net/neutron/+bug/1781129 linuxbridge-agent missed updated device sometimes fix to this also proposed https://review.openstack.org/#/c/582084/ 3. https://bugs.launchpad.net/neutron/+bug/1781179 [RFE] Send "update" instead of "remove" notification for dvr rescheduling This was reported as RFE, however it didn't look like rfe to me, so I removed the tag, and marked it as medium Patch was proposed and approved (https://review.openstack.org/#/c/581658/ ) Bugs Need more discussion 4. https://bugs.launchpad.net/neutron/+bug/1781354 VPNaaS: IPsec siteconnection status DOWN while using IKE v2 I couldn't confirm this bug, since I did not have VPnaas setup locally, but this need more discussion and a fix Is also proposed https://review.openstack.org/#/c/582113/ 5. https://bugs.launchpad.net/neutron/+bug/1782026 Routed provider network - DHCP agent failure I think someone expert in routed provider network can take a look and confirm this. Bugs marked Invalid or Incomplete 6. https://bugs.launchpad.net/neutron/+bug/1782001 marked invalid Thanks and Regards ! Manjeet Singh Bhatia -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Tue Jul 17 03:04:13 2018 From: emilien at redhat.com (Emilien Macchi) Date: Mon, 16 Jul 2018 23:04:13 -0400 Subject: [openstack-dev] [tripleo] Stein blueprint - Plan to remove Keepalived support (replaced by Pacemaker) In-Reply-To: <20180716180719.GB4445@palahniuk.int.rhx> References: <20180716180719.GB4445@palahniuk.int.rhx> Message-ID: Thanks everyone for the feedback, I've made a quick PoC: https://review.openstack.org/#/q/topic:bp/undercloud-pacemaker-default And I'm currently doing local testing. I'll publish results when progress is made, but I've made it so we have the choice to enable pacemaker (disabled by default), where keepalived would remain the default for now. On Mon, Jul 16, 2018 at 2:07 PM Michele Baldessari wrote: > On Mon, Jul 16, 2018 at 11:48:51AM -0400, Emilien Macchi wrote: > > On Mon, Jul 16, 2018 at 11:42 AM Dan Prince wrote: > > [...] > > > > > The biggest downside IMO is the fact that our Pacemaker integration is > > > not containerized. Nor are there any plans to finish the > > > containerization of it. Pacemaker has to currently run on baremetal > > > and this makes the installation of it for small dev/test setups a lot > > > less desirable. It can launch containers just fine but the pacemaker > > > installation itself is what concerns me for the long term. > > > > > > Until we have plans for containizing it I suppose I would rather see > > > us keep keepalived as an option for these smaller setups. We can > > > certainly change our default Undercloud to use Pacemaker (if we choose > > > to do so). But having keepalived around for "lightweight" (zero or low > > > footprint) installs that work is really quite desirable. > > > > > > > That's a good point, and I agree with your proposal. > > Michele, what's the long term plan regarding containerized pacemaker? 
> > Well, we kind of started evaluating it (there was definitely not enough > time around pike/queens as we were busy landing the bundles code), then > due to discussions around k8s it kind of got off our radar. We can > at least resume the discussions around it and see how much effort it > would be. I'll bring it up with my team and get back to you. > > cheers, > Michele > -- > Michele Baldessari > C2A5 9DA3 9961 4FFB E01B D0BC DDD4 DCCB 7515 5C6D > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From lijie at unitedstack.com Tue Jul 17 03:17:52 2018 From: lijie at unitedstack.com (=?utf-8?B?UmFtYm8=?=) Date: Tue, 17 Jul 2018 11:17:52 +0800 Subject: [openstack-dev] [cinder] about block device driver In-Reply-To: <20180716133237.GA19698@sm-workstation> References: <20180716092027.pc43radmozdgndd5@localhost> <20180716113226.hdqpzfkeyjkpx5kn@localhost> <20180716133237.GA19698@sm-workstation> Message-ID: But I want to create a volume backed server for data processing scenarios,maybe the BlockDeviceDriver is more suitable. ------------------ Original ------------------ From: "Sean McGinnis"; Date: 2018年7月16日(星期一) 晚上9:32 To: "OpenStack Developmen"; Subject: Re: [openstack-dev] [cinder] about block device driver On Mon, Jul 16, 2018 at 01:32:26PM +0200, Gorka Eguileor wrote: > On 16/07, Rambo wrote: > > Well,in my opinion,the BlockDeviceDriver is more suitable than any other solution for data processing scenarios.Does the community will agree to merge the BlockDeviceDriver to the Cinder repository again if our company hold the maintainer and CI? > > > > Hi, > > I'm sure the community will be happy to merge the driver back into the > repository. > The other reason for its removal was its inability to meet the minimum feature set required for Cinder drivers along with benchmarks showing the LVM and iSCSI driver could be tweaked to have similar or better performance. The other option would be to not use Cinder volumes so you just use local storage on your compute nodes. Readding the block device driver is not likely an option. __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From ukalifon at redhat.com Tue Jul 17 04:57:00 2018 From: ukalifon at redhat.com (Udi Kalifon) Date: Tue, 17 Jul 2018 07:57:00 +0300 Subject: [openstack-dev] [tripleo] New "validation" subcommand for "openstack undercloud" In-Reply-To: References: Message-ID: We should also add support for the openstack client to launch the other validators that are used in the GUI. There are validators for the overcloud as well, and new validators are added all the time. These validators are installed under /usr/share/openstack-tripleo-validations/validations/ and they're launched by the command: ansible-playbook -i /usr/bin/tripleo-ansible-inventory /usr/share/openstack-tripleo-validations/validations/<> Cedric, feel free to open an RFE. 
Regards, Udi Kalifon; Senior QE; RHOS-UI Automation On Mon, Jul 16, 2018 at 6:32 PM, Dan Prince wrote: > On Mon, Jul 16, 2018 at 11:27 AM Cédric Jeanneret > wrote: > > > > Dear Stackers, > > > > In order to let operators properly validate their undercloud node, I > > propose to create a new subcommand in the "openstack undercloud" "tree": > > `openstack undercloud validate' > > > > This should only run the different validations we have in the > > undercloud_preflight.py¹ > > That way, an operator will be able to ensure all is valid before > > starting "for real" any other command like "install" or "upgrade". > > > > Of course, this "validate" step is embedded in the "install" and > > "upgrade" already, but having the capability to just validate without > > any further action is something that can be interesting, for example: > > > > - ensure the current undercloud hardware/vm is sufficient for an update > > - ensure the allocated VM for the undercloud is sufficient for a deploy > > - and so on > > > > There are probably other possibilities, if we extend the "validation" > > scope outside the "undercloud" (like, tripleo, allinone, even overcloud). > > > > What do you think? Any pros/cons/thoughts? > > I think this command could be very useful. I'm assuming the underlying > implementation would call a 'heat stack-validate' using an ephemeral > heat-all instance. If so way we implement it for the undercloud vs the > 'standalone' use case would likely be a bit different. We can probably > subclass the implementations to share common code across the efforts > though. > > For the undercloud you are likely to have a few extra 'local only' > validations. Perhaps extra checks for things on the client side. > > For the all-in-one I had envisioned using the output from the 'heat > stack-validate' to create a sample config file for a custom set of > services. Similar to how tools like Packstack generate a config file > for example. > > Dan > > > > > Cheers, > > > > C. > > > > > > > > ¹ > > http://git.openstack.org/cgit/openstack/python-tripleoclient/tree/ > tripleoclient/v1/undercloud_preflight.py > > -- > > Cédric Jeanneret > > Software Engineer > > DFG:DF > > > > ____________________________________________________________ > ______________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Pablo.Iranzo at redhat.com Tue Jul 17 05:32:47 2018 From: Pablo.Iranzo at redhat.com (Pablo Iranzo =?iso-8859-1?Q?G=F3mez?=) Date: Tue, 17 Jul 2018 07:32:47 +0200 Subject: [openstack-dev] [tripleo] New "validation" subcommand for "openstack undercloud" In-Reply-To: Message-ID: <20180717053247.GR6658@redhat.com> Hi > Dear Stackers, > > In order to let operators properly validate their undercloud node, I > propose to create a new subcommand in the "openstack undercloud" "tree": > `openstack undercloud validate' > > This should only run the different validations we have in the > undercloud_preflight.py > That way, an operator will be able to ensure all is valid before > starting "for real" any other command like "install" or "upgrade". > > Of course, this "validate" step is embedded in the "install" and > "upgrade" already, but having the capability to just validate without > any further action is something that can be interesting, for example: > > - ensure the current undercloud hardware/vm is sufficient for an update > - ensure the allocated VM for the undercloud is sufficient for a deploy > - and so on > > There are probably other possibilities, if we extend the "validation" > scope outside the "undercloud" (like, tripleo, allinone, even overcloud). > > What do you think? Any pros/cons/thoughts? Great idea. We did something similar from support side with https://citellus.org not just for upgrades but also for identifying ongoing issues also from sosreports. Wes did a POC at https://review.openstack.org/#/c/553571/ for integrating it too. So if we could even reuse them somehow, that will be great. Thanks! Pablo > > Cheers, > > C. -- Pablo Iranzo Gómez (Pablo.Iranzo at redhat.com) GnuPG: 0x5BD8E1E4 Senior Software Maintenance Engineer - OpenStack RHC{A,SS,DS,VA,E,SA,SP,AOSP}, JBCAA #110-215-852 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 228 bytes Desc: not available URL: From deepthi.v.v at ericsson.com Tue Jul 17 05:38:29 2018 From: deepthi.v.v at ericsson.com (Deepthi V V) Date: Tue, 17 Jul 2018 05:38:29 +0000 Subject: [openstack-dev] [oslo-reports][neutron] GMR for neutron Message-ID: Hi, I am trying to get Guru Meditation Report from NEUTRON child process. I have tried below scenarios. GMR was generated only for the parent process in all the cases. 1. Kill -SIGUSR2 2. Kill -SIGUSR2 How can I get GMR for a child process. I assumed option 1 would give GMR for parent process and all its child processes. Is the assumption wrong? Thanks, Deepthi -------------- next part -------------- An HTML attachment was scrubbed... URL: From cjeanner at redhat.com Tue Jul 17 05:42:01 2018 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Tue, 17 Jul 2018 07:42:01 +0200 Subject: [openstack-dev] [tripleo] New "validation" subcommand for "openstack undercloud" In-Reply-To: References: Message-ID: <259217c2-9512-025f-3088-88b4dcba813e@redhat.com> On 07/17/2018 06:57 AM, Udi Kalifon wrote: > We should also add support for the openstack client to launch the other > validators that are used in the GUI. There are validators for the > overcloud as well, and new validators are added all the time. 
> > These validators are installed under > /usr/share/openstack-tripleo-validations/validations/ and they're > launched by the command: > ansible-playbook -i /usr/bin/tripleo-ansible-inventory > /usr/share/openstack-tripleo-validations/validations/<> Hey, funky - I'm currently adding the support for ansible-playbook (in an "easy, fast and pre-step" way) to the tripleoclient in order to be able to run validations from that very same location: https://review.openstack.org/582917 Guess we're on the same track :). > > Cedric, feel free to open an RFE. Will do once we have the full scope :). > > > > > Regards, > Udi Kalifon; Senior QE; RHOS-UIAutomation > > > On Mon, Jul 16, 2018 at 6:32 PM, Dan Prince > wrote: > > On Mon, Jul 16, 2018 at 11:27 AM Cédric Jeanneret > > wrote: > > > > Dear Stackers, > > > > In order to let operators properly validate their undercloud node, I > > propose to create a new subcommand in the "openstack undercloud" "tree": > > `openstack undercloud validate' > > > > This should only run the different validations we have in the > > undercloud_preflight.py¹ > > That way, an operator will be able to ensure all is valid before > > starting "for real" any other command like "install" or "upgrade". > > > > Of course, this "validate" step is embedded in the "install" and > > "upgrade" already, but having the capability to just validate without > > any further action is something that can be interesting, for example: > > > > - ensure the current undercloud hardware/vm is sufficient for an update > > - ensure the allocated VM for the undercloud is sufficient for a deploy > > - and so on > > > > There are probably other possibilities, if we extend the "validation" > > scope outside the "undercloud" (like, tripleo, allinone, even overcloud). > > > > What do you think? Any pros/cons/thoughts? > > I think this command could be very useful. I'm assuming the underlying > implementation would call a 'heat stack-validate' using an ephemeral > heat-all instance. If so way we implement it for the undercloud vs the > 'standalone' use case would likely be a bit different. We can probably > subclass the implementations to share common code across the efforts > though. > > For the undercloud you are likely to have a few extra 'local only' > validations. Perhaps extra checks for things on the client side. > > For the all-in-one I had envisioned using the output from the 'heat > stack-validate' to create a sample config file for a custom set of > services. Similar to how tools like Packstack generate a config file > for example. > > Dan > > > > > Cheers, > > > > C. > > > > > > > > ¹ > > http://git.openstack.org/cgit/openstack/python-tripleoclient/tree/tripleoclient/v1/undercloud_preflight.py > > > -- > > Cédric Jeanneret > > Software Engineer > > DFG:DF > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > -- Cédric Jeanneret Software Engineer DFG:DF -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From strigazi at gmail.com Tue Jul 17 06:57:25 2018 From: strigazi at gmail.com (Spyros Trigazis) Date: Tue, 17 Jul 2018 08:57:25 +0200 Subject: [openstack-dev] [magnum] Nominate Feilong Wang for Core Reviewer Message-ID: Hello list, I'm excited to nominate Feilong as Core Reviewer for the Magnum project. Feilong has contributed many features like Calico as an alternative CNI for kubernetes, make coredns scale proportionally to the cluster, improved admin operations on clusters and improved multi-master deployments. Apart from contributing to the project he has been contributing to other projects like gophercloud and shade, he has been very helpful with code reviews and he tests and reviews all patches that are coming in. Finally, he is very responsive on IRC and in the ML. Thanks for all your contributions Feilong, I'm looking forward to working with you more! Cheers, Spyros -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykarel at redhat.com Tue Jul 17 07:04:55 2018 From: ykarel at redhat.com (Yatin Karel) Date: Tue, 17 Jul 2018 12:34:55 +0530 Subject: [openstack-dev] [magnum] Nominate Feilong Wang for Core Reviewer In-Reply-To: References: Message-ID: +2 Well deserved. Welcome Feilong and Thanks for all the Great Work!!! Regards Yatin Karel On Tue, Jul 17, 2018 at 12:27 PM, Spyros Trigazis wrote: > Hello list, > > I'm excited to nominate Feilong as Core Reviewer for the Magnum project. > > Feilong has contributed many features like Calico as an alternative CNI for > kubernetes, make coredns scale proportionally to the cluster, improved > admin operations on clusters and improved multi-master deployments. Apart > from contributing to the project he has been contributing to other projects > like gophercloud and shade, he has been very helpful with code reviews > and he tests and reviews all patches that are coming in. Finally, he is very > responsive on IRC and in the ML. > > Thanks for all your contributions Feilong, I'm looking forward to working > with > you more! > > Cheers, > Spyros > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From lijie at unitedstack.com Tue Jul 17 07:24:34 2018 From: lijie at unitedstack.com (=?utf-8?B?UmFtYm8=?=) Date: Tue, 17 Jul 2018 15:24:34 +0800 Subject: [openstack-dev] [cinder] about block device driver In-Reply-To: <20180716133237.GA19698@sm-workstation> References: <20180716092027.pc43radmozdgndd5@localhost> <20180716113226.hdqpzfkeyjkpx5kn@localhost> <20180716133237.GA19698@sm-workstation> Message-ID: Oh,the instances using Cinder perform intense I/O, thus iSCSI or LVM is not a viable option - benchmarked them several times, unsatisfactory results.Sometimes it's IOPS is twice as bad,could you show me your test data?Thank you! 
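To make numbers like "twice as bad" directly comparable, it helps to share the exact benchmark command. A sketch of an fio run that could be executed once against a local/ephemeral disk and once against an attached LVM+LIO volume (the device path and job parameters here are only examples, not taken from the thread):

    fio --name=randrw --ioengine=libaio --direct=1 --rw=randrw \
        --bs=4k --iodepth=32 --numjobs=4 --runtime=60 --time_based \
        --filename=/dev/vdb --group_reporting

Publishing the job file together with the resulting IOPS/latency figures makes it much easier for others to reproduce the comparison.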
Cheers, Rambo ------------------ Original ------------------ From: "Sean McGinnis"; Date: 2018年7月16日(星期一) 晚上9:32 To: "OpenStack Developmen"; Subject: Re: [openstack-dev] [cinder] about block device driver On Mon, Jul 16, 2018 at 01:32:26PM +0200, Gorka Eguileor wrote: > On 16/07, Rambo wrote: > > Well,in my opinion,the BlockDeviceDriver is more suitable than any other solution for data processing scenarios.Does the community will agree to merge the BlockDeviceDriver to the Cinder repository again if our company hold the maintainer and CI? > > > > Hi, > > I'm sure the community will be happy to merge the driver back into the > repository. > The other reason for its removal was its inability to meet the minimum feature set required for Cinder drivers along with benchmarks showing the LVM and iSCSI driver could be tweaked to have similar or better performance. The other option would be to not use Cinder volumes so you just use local storage on your compute nodes. Readding the block device driver is not likely an option. __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From neil at tigera.io Tue Jul 17 07:36:56 2018 From: neil at tigera.io (Neil Jerram) Date: Tue, 17 Jul 2018 08:36:56 +0100 Subject: [openstack-dev] [neutron] How to look up a project name from Neutron server code? Message-ID: Can someone help me with how to look up a project name (aka tenant name) for a known project/tenant ID, from code (specifically a mechanism driver) running in the Neutron server? I believe that means I need to make a GET REST call as here: https://developer.openstack.org/api-ref/identity/v3/index.html#projects. But I don't yet understand how a piece of Neutron server code can ensure that it has the right credentials to do that. If someone happens to have actual code for doing this, I'm sure that would be very helpful. (I'm aware that whenever the Neutron server processes an API request, the project name for the project that generated that request is added into the request context. That is great when my code is running in an API request context. But there are other times when the code isn't in a request context and still needs to map from a project ID to project name; hence the question here.) Many thanks, Neil -------------- next part -------------- An HTML attachment was scrubbed... URL: From ukalifon at redhat.com Tue Jul 17 07:53:00 2018 From: ukalifon at redhat.com (Udi Kalifon) Date: Tue, 17 Jul 2018 10:53:00 +0300 Subject: [openstack-dev] [tripleo] New "validation" subcommand for "openstack undercloud" In-Reply-To: <259217c2-9512-025f-3088-88b4dcba813e@redhat.com> References: <259217c2-9512-025f-3088-88b4dcba813e@redhat.com> Message-ID: I opened this RFE: https://bugzilla.redhat.com/show_bug.cgi?id=1601739 Regards, Udi Kalifon; Senior QE; RHOS-UI Automation On Tue, Jul 17, 2018 at 8:42 AM, Cédric Jeanneret wrote: > > > On 07/17/2018 06:57 AM, Udi Kalifon wrote: > > We should also add support for the openstack client to launch the other > > validators that are used in the GUI. There are validators for the > > overcloud as well, and new validators are added all the time. 
> > > > These validators are installed under > > /usr/share/openstack-tripleo-validations/validations/ and they're > > launched by the command: > > ansible-playbook -i /usr/bin/tripleo-ansible-inventory > > /usr/share/openstack-tripleo-validations/validations/<< > validator-name.py>> > > Hey, funky - I'm currently adding the support for ansible-playbook (in > an "easy, fast and pre-step" way) to the tripleoclient in order to be > able to run validations from that very same location: > https://review.openstack.org/582917 > > Guess we're on the same track :). > > > > > Cedric, feel free to open an RFE. > > Will do once we have the full scope :). > > > > > > > > > > > Regards, > > Udi Kalifon; Senior QE; RHOS-UIAutomation > > > > > > On Mon, Jul 16, 2018 at 6:32 PM, Dan Prince > > wrote: > > > > On Mon, Jul 16, 2018 at 11:27 AM Cédric Jeanneret > > > wrote: > > > > > > Dear Stackers, > > > > > > In order to let operators properly validate their undercloud node, > I > > > propose to create a new subcommand in the "openstack undercloud" > "tree": > > > `openstack undercloud validate' > > > > > > This should only run the different validations we have in the > > > undercloud_preflight.py¹ > > > That way, an operator will be able to ensure all is valid before > > > starting "for real" any other command like "install" or "upgrade". > > > > > > Of course, this "validate" step is embedded in the "install" and > > > "upgrade" already, but having the capability to just validate > without > > > any further action is something that can be interesting, for > example: > > > > > > - ensure the current undercloud hardware/vm is sufficient for an > update > > > - ensure the allocated VM for the undercloud is sufficient for a > deploy > > > - and so on > > > > > > There are probably other possibilities, if we extend the > "validation" > > > scope outside the "undercloud" (like, tripleo, allinone, even > overcloud). > > > > > > What do you think? Any pros/cons/thoughts? > > > > I think this command could be very useful. I'm assuming the > underlying > > implementation would call a 'heat stack-validate' using an ephemeral > > heat-all instance. If so way we implement it for the undercloud vs > the > > 'standalone' use case would likely be a bit different. We can > probably > > subclass the implementations to share common code across the efforts > > though. > > > > For the undercloud you are likely to have a few extra 'local only' > > validations. Perhaps extra checks for things on the client side. > > > > For the all-in-one I had envisioned using the output from the 'heat > > stack-validate' to create a sample config file for a custom set of > > services. Similar to how tools like Packstack generate a config file > > for example. > > > > Dan > > > > > > > > Cheers, > > > > > > C. 
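As a concrete illustration of the ansible-playbook invocation quoted above, running a single pre-installed validation against the dynamic TripleO inventory could look like this (the validation file name is only an example; list the directory to see what is actually shipped):

    # See which validations are installed, then run one of them.
    ls /usr/share/openstack-tripleo-validations/validations/
    ansible-playbook -i /usr/bin/tripleo-ansible-inventory \
        /usr/share/openstack-tripleo-validations/validations/undercloud-ram.yaml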
> > > > > > > > > > > > ¹ > > > http://git.openstack.org/cgit/openstack/python-tripleoclient/tree/ > tripleoclient/v1/undercloud_preflight.py > > tripleoclient/v1/undercloud_preflight.py> > > > -- > > > Cédric Jeanneret > > > Software Engineer > > > DFG:DF > > > > > > > > ____________________________________________________________ > ______________ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: > > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > unsubscribe> > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > ____________________________________________________________ > ______________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > unsubscribe> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > -- > Cédric Jeanneret > Software Engineer > DFG:DF > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From e0ne at e0ne.info Tue Jul 17 08:09:47 2018 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Tue, 17 Jul 2018 11:09:47 +0300 Subject: [openstack-dev] [cinder] about block device driver In-Reply-To: References: <20180716092027.pc43radmozdgndd5@localhost> <20180716113226.hdqpzfkeyjkpx5kn@localhost> <20180716133237.GA19698@sm-workstation> Message-ID: Rambo, Did you try to use LVM+LIO target driver? It shows pretty good performance comparing to BlockDeviceDriver, Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ On Tue, Jul 17, 2018 at 10:24 AM, Rambo wrote: > Oh,the instances using Cinder perform intense I/O, thus iSCSI or LVM is > not a viable option - benchmarked them several times, unsatisfactory > results.Sometimes it's IOPS is twice as bad,could you show me your test > data?Thank you! > > > > Cheers, > Rambo > > > ------------------ Original ------------------ > *From:* "Sean McGinnis"; > *Date:* 2018年7月16日(星期一) 晚上9:32 > *To:* "OpenStack Developmen"; > *Subject:* Re: [openstack-dev] [cinder] about block device driver > > On Mon, Jul 16, 2018 at 01:32:26PM +0200, Gorka Eguileor wrote: > > On 16/07, Rambo wrote: > > > Well,in my opinion,the BlockDeviceDriver is more suitable than any > other solution for data processing scenarios.Does the community will agree > to merge the BlockDeviceDriver to the Cinder repository again if our > company hold the maintainer and CI? > > > > > > > Hi, > > > > I'm sure the community will be happy to merge the driver back into the > > repository. > > > > The other reason for its removal was its inability to meet the minimum > feature > set required for Cinder drivers along with benchmarks showing the LVM and > iSCSI > driver could be tweaked to have similar or better performance. > > The other option would be to not use Cinder volumes so you just use local > storage on your compute nodes. > > Readding the block device driver is not likely an option. 
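For reference, a rough sketch of the LVM+LIO backend stanza Ivan is referring to, as it might appear in cinder.conf (section and backend names are examples; on releases older than Queens the option is iscsi_helper rather than target_helper):

    [DEFAULT]
    enabled_backends = lvm

    [lvm]
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    volume_group = cinder-volumes
    target_helper = lioadm
    target_protocol = iscsi
    volume_backend_name = lvm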
> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lijie at unitedstack.com Tue Jul 17 08:52:34 2018 From: lijie at unitedstack.com (=?utf-8?B?UmFtYm8=?=) Date: Tue, 17 Jul 2018 16:52:34 +0800 Subject: [openstack-dev] [cinder] about block device driver In-Reply-To: References: <20180716092027.pc43radmozdgndd5@localhost> <20180716113226.hdqpzfkeyjkpx5kn@localhost> <20180716133237.GA19698@sm-workstation> Message-ID: yes,My cinder driver is LVM+LIO.I have upload the test result in appendix.Can you show me your test results?Thank you! ------------------ Original ------------------ From: "Ivan Kolodyazhny"; Date: Tue, Jul 17, 2018 04:09 PM To: "OpenStack Developmen"; Subject: Re: [openstack-dev] [cinder] about block device driver Rambo, Did you try to use LVM+LIO target driver? It shows pretty good performance comparing to BlockDeviceDriver, Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ On Tue, Jul 17, 2018 at 10:24 AM, Rambo wrote: Oh,the instances using Cinder perform intense I/O, thus iSCSI or LVM is not a viable option - benchmarked them several times, unsatisfactory results.Sometimes it's IOPS is twice as bad,could you show me your test data?Thank you! Cheers, Rambo ------------------ Original ------------------ From: "Sean McGinnis"; Date: 2018年7月16日(星期一) 晚上9:32 To: "OpenStack Developmen"; Subject: Re: [openstack-dev] [cinder] about block device driver On Mon, Jul 16, 2018 at 01:32:26PM +0200, Gorka Eguileor wrote: > On 16/07, Rambo wrote: > > Well,in my opinion,the BlockDeviceDriver is more suitable than any other solution for data processing scenarios.Does the community will agree to merge the BlockDeviceDriver to the Cinder repository again if our company hold the maintainer and CI? > > > > Hi, > > I'm sure the community will be happy to merge the driver back into the > repository. > The other reason for its removal was its inability to meet the minimum feature set required for Cinder drivers along with benchmarks showing the LVM and iSCSI driver could be tweaked to have similar or better performance. The other option would be to not use Cinder volumes so you just use local storage on your compute nodes. Readding the block device driver is not likely an option. __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: 99BB7964 at 8738C509.52AE4D5B.png Type: application/octet-stream Size: 22282 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: test2639.png Type: application/octet-stream Size: 22282 bytes Desc: not available URL: From e0ne at e0ne.info Tue Jul 17 09:00:17 2018 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Tue, 17 Jul 2018 12:00:17 +0300 Subject: [openstack-dev] [cinder] about block device driver In-Reply-To: References: <20180716092027.pc43radmozdgndd5@localhost> <20180716113226.hdqpzfkeyjkpx5kn@localhost> <20180716133237.GA19698@sm-workstation> Message-ID: Do you use the volumes on the same nodes where instances are located? Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ On Tue, Jul 17, 2018 at 11:52 AM, Rambo wrote: > yes,My cinder driver is LVM+LIO.I have upload the test result in > appendix.Can you show me your test results?Thank you! > > > > ------------------ Original ------------------ > *From: * "Ivan Kolodyazhny"; > *Date: * Tue, Jul 17, 2018 04:09 PM > *To: * "OpenStack Developmen"; > *Subject: * Re: [openstack-dev] [cinder] about block device driver > > Rambo, > > Did you try to use LVM+LIO target driver? It shows pretty good performance > comparing to BlockDeviceDriver, > > Regards, > Ivan Kolodyazhny, > http://blog.e0ne.info/ > > On Tue, Jul 17, 2018 at 10:24 AM, Rambo wrote: > >> Oh,the instances using Cinder perform intense I/O, thus iSCSI or LVM is >> not a viable option - benchmarked them several times, unsatisfactory >> results.Sometimes it's IOPS is twice as bad,could you show me your test >> data?Thank you! >> >> >> >> Cheers, >> Rambo >> >> >> ------------------ Original ------------------ >> *From:* "Sean McGinnis"; >> *Date:* 2018年7月16日(星期一) 晚上9:32 >> *To:* "OpenStack Developmen"; >> *Subject:* Re: [openstack-dev] [cinder] about block device driver >> >> On Mon, Jul 16, 2018 at 01:32:26PM +0200, Gorka Eguileor wrote: >> > On 16/07, Rambo wrote: >> > > Well,in my opinion,the BlockDeviceDriver is more suitable than any >> other solution for data processing scenarios.Does the community will agree >> to merge the BlockDeviceDriver to the Cinder repository again if our >> company hold the maintainer and CI? >> > > >> > >> > Hi, >> > >> > I'm sure the community will be happy to merge the driver back into the >> > repository. >> > >> >> The other reason for its removal was its inability to meet the minimum >> feature >> set required for Cinder drivers along with benchmarks showing the LVM and >> iSCSI >> driver could be tweaked to have similar or better performance. >> >> The other option would be to not use Cinder volumes so you just use local >> storage on your compute nodes. >> >> Readding the block device driver is not likely an option. 
>> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 99BB7964 at 8738C509.52AE4D5B.png Type: application/octet-stream Size: 22282 bytes Desc: not available URL: From eumel at arcor.de Tue Jul 17 09:07:16 2018 From: eumel at arcor.de (Frank Kloeker) Date: Tue, 17 Jul 2018 11:07:16 +0200 Subject: [openstack-dev] [i18n] Edge and Containers whitepapers ready for translation In-Reply-To: <5B4CB93F.6070202@openstack.org> References: <5B4CB64F.4060602@openstack.org> <5B4CB93F.6070202@openstack.org> Message-ID: Hi Jimmy, permission was added for you and Sebastian. The Container Whitepaper is on the Zanata frontpage now. But we removed Edge Computing whitepaper last week because there is a kind of displeasure in the team since the results of translation are still not published beside Chinese version. It would be nice if we have a commitment from the Foundation that results are published in a specific timeframe. This includes your requirements until the translation should be available. thx Frank Am 2018-07-16 17:26, schrieb Jimmy McArthur: > Sorry, I should have also added... we additionally need permissions so > that we can add the a new version of the pot file to this project: > https://translate.openstack.org/project/view/edge-computing/versions?dswid=-7835 > > Thanks! > Jimmy > > > > Jimmy McArthur wrote: >> Hi all - >> >> We have both of the current whitepapers up and available for >> translation. Can we promote these on the Zanata homepage? >> >> https://translate.openstack.org/project/view/leveraging-containers-openstack?dswid=5684 >> https://translate.openstack.org/iteration/view/edge-computing/master/documents?dswid=5684 >> Thanks all! 
>> Jimmy > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From lijie at unitedstack.com Tue Jul 17 09:24:00 2018 From: lijie at unitedstack.com (=?utf-8?B?UmFtYm8=?=) Date: Tue, 17 Jul 2018 17:24:00 +0800 Subject: [openstack-dev] [cinder] about block device driver In-Reply-To: References: <20180716092027.pc43radmozdgndd5@localhost> <20180716113226.hdqpzfkeyjkpx5kn@localhost> <20180716133237.GA19698@sm-workstation> Message-ID: yes ------------------ Original ------------------ From: "Ivan Kolodyazhny"; Date: Tue, Jul 17, 2018 05:00 PM To: "OpenStack Developmen"; Subject: Re: [openstack-dev] [cinder] about block device driver Do you use the volumes on the same nodes where instances are located? Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ On Tue, Jul 17, 2018 at 11:52 AM, Rambo wrote: yes,My cinder driver is LVM+LIO.I have upload the test result in appendix.Can you show me your test results?Thank you! ------------------ Original ------------------ From: "Ivan Kolodyazhny"; Date: Tue, Jul 17, 2018 04:09 PM To: "OpenStack Developmen"; Subject: Re: [openstack-dev] [cinder] about block device driver Rambo, Did you try to use LVM+LIO target driver? It shows pretty good performance comparing to BlockDeviceDriver, Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ On Tue, Jul 17, 2018 at 10:24 AM, Rambo wrote: Oh,the instances using Cinder perform intense I/O, thus iSCSI or LVM is not a viable option - benchmarked them several times, unsatisfactory results.Sometimes it's IOPS is twice as bad,could you show me your test data?Thank you! Cheers, Rambo ------------------ Original ------------------ From: "Sean McGinnis"; Date: 2018年7月16日(星期一) 晚上9:32 To: "OpenStack Developmen"; Subject: Re: [openstack-dev] [cinder] about block device driver On Mon, Jul 16, 2018 at 01:32:26PM +0200, Gorka Eguileor wrote: > On 16/07, Rambo wrote: > > Well,in my opinion,the BlockDeviceDriver is more suitable than any other solution for data processing scenarios.Does the community will agree to merge the BlockDeviceDriver to the Cinder repository again if our company hold the maintainer and CI? > > > > Hi, > > I'm sure the community will be happy to merge the driver back into the > repository. > The other reason for its removal was its inability to meet the minimum feature set required for Cinder drivers along with benchmarks showing the LVM and iSCSI driver could be tweaked to have similar or better performance. The other option would be to not use Cinder volumes so you just use local storage on your compute nodes. Readding the block device driver is not likely an option. 
__________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: D7C81B68 at 5B350B78.B0B54D5B Type: application/octet-stream Size: 22282 bytes Desc: not available URL: From anlin.kong at gmail.com Tue Jul 17 11:10:24 2018 From: anlin.kong at gmail.com (Lingxian Kong) Date: Tue, 17 Jul 2018 23:10:24 +1200 Subject: [openstack-dev] [magnum] Nominate Feilong Wang for Core Reviewer In-Reply-To: References: Message-ID: Huge +1 Cheers, Lingxian Kong On Tue, Jul 17, 2018 at 7:04 PM, Yatin Karel wrote: > +2 Well deserved. > > Welcome Feilong and Thanks for all the Great Work!!! > > > Regards > Yatin Karel > > On Tue, Jul 17, 2018 at 12:27 PM, Spyros Trigazis > wrote: > > Hello list, > > > > I'm excited to nominate Feilong as Core Reviewer for the Magnum project. > > > > Feilong has contributed many features like Calico as an alternative CNI > for > > kubernetes, make coredns scale proportionally to the cluster, improved > > admin operations on clusters and improved multi-master deployments. Apart > > from contributing to the project he has been contributing to other > projects > > like gophercloud and shade, he has been very helpful with code reviews > > and he tests and reviews all patches that are coming in. Finally, he is > very > > responsive on IRC and in the ML. > > > > Thanks for all your contributions Feilong, I'm looking forward to working > > with > > you more! > > > > Cheers, > > Spyros > > > > ____________________________________________________________ > ______________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at ericsson.com Tue Jul 17 11:17:52 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Tue, 17 Jul 2018 13:17:52 +0200 Subject: [openstack-dev] [nova]Notification update week 29 Message-ID: <1531826273.16131.1@smtp.office365.com> Hi, Here is the latest notification subteam update. 
Bugs ---- [Low] Notification sending sometimes hits the keystone API to get glance endpoints https://bugs.launchpad.net/nova/+bug/1757407 Need a final look from Matt but otherwise ready https://review.openstack.org/#/c/564528/ No new bugs tagged with notifications and no progress with the existing ones. Features -------- Versioned notification transformation ------------------------------------- Good progress last week thanks to Takashi and Matt. We still have two patches on the topic that only needs a second core to look at: https://review.openstack.org/#/q/status:open+topic:bp/versioned-notification-transformation-rocky Weekly meeting -------------- No meeting this week. Please ping me on IRC if you have something important to talk about. Cheers, gibi -------------- next part -------------- An HTML attachment was scrubbed... URL: From ashlee at openstack.org Tue Jul 17 13:34:03 2018 From: ashlee at openstack.org (Ashlee Ferguson) Date: Tue, 17 Jul 2018 08:34:03 -0500 Subject: [openstack-dev] OpenStack Summit Berlin CFP Deadline Today Message-ID: Hi everyone, The CFP for the OpenStack Summit Berlin closes July 17 at 11:59pm PST (July 18 at 6:59am UTC), so make sure to press submit on your talks for: • CI/CD • Container Infrastructure • Edge Computing • Hands-on Workshops • HPC / GPU / AI • Open Source Community • Private & Hybrid Cloud • Public Cloud • Telecom & NFV SUBMIT HERE Register for the Summit - Early Bird pricing ends August 21 Become a Sponsor If you have any questions, please email summit at openstack.org . Cheers, Ashlee Ashlee Ferguson OpenStack Foundation ashlee at openstack.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From jimmy at openstack.org Tue Jul 17 14:01:24 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Tue, 17 Jul 2018 09:01:24 -0500 Subject: [openstack-dev] [i18n] Edge and Containers whitepapers ready for translation In-Reply-To: References: <5B4CB64F.4060602@openstack.org> <5B4CB93F.6070202@openstack.org> Message-ID: <5B4DF6B4.9030501@openstack.org> Frank, I'm sorry to hear about the displeasure around the Edge paper. As mentioned in a prior thread, the RST format that Akihiro worked did not work with the Zanata process that we have been using with our CMS. Additionally, the existing EDGE page is a PDF, so we had to build a new template to work with the new HTML whitepaper layout we created for the Containers paper. I outlined this in the thread " [OpenStack-I18n] [Edge-computing] [Openstack-sigs] Edge Computing Whitepaper Translation" on 6/25/18 and mentioned we would be ready with the template around 7/13. We completed the work on the new whitepaper template and then put out the pot files on Zanata so we can get the po language files back. If this process is too cumbersome for the translation team, I'm open to discussion, but right now our entire translation process is based on the official OpenStack Docs translation process outlined by the i18n team: https://docs.openstack.org/i18n/latest/en_GB/tools.html Again, I realize Akihiro put in some work on his own proposing the new translation type. If the i18n team is moving to this format instead, we can work on redoing our process. Please let me know if I can clarify further. Thanks, Jimmy Frank Kloeker wrote: > Hi Jimmy, > > permission was added for you and Sebastian. The Container Whitepaper > is on the Zanata frontpage now. 
But we removed Edge Computing > whitepaper last week because there is a kind of displeasure in the > team since the results of translation are still not published beside > Chinese version. It would be nice if we have a commitment from the > Foundation that results are published in a specific timeframe. This > includes your requirements until the translation should be available. > > thx Frank > > Am 2018-07-16 17:26, schrieb Jimmy McArthur: >> Sorry, I should have also added... we additionally need permissions so >> that we can add the a new version of the pot file to this project: >> https://translate.openstack.org/project/view/edge-computing/versions?dswid=-7835 >> >> >> Thanks! >> Jimmy >> >> >> >> Jimmy McArthur wrote: >>> Hi all - >>> >>> We have both of the current whitepapers up and available for >>> translation. Can we promote these on the Zanata homepage? >>> >>> https://translate.openstack.org/project/view/leveraging-containers-openstack?dswid=5684 >>> https://translate.openstack.org/iteration/view/edge-computing/master/documents?dswid=5684 >>> Thanks all! >>> Jimmy >> >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From thierry at openstack.org Tue Jul 17 14:44:09 2018 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 17 Jul 2018 16:44:09 +0200 Subject: [openstack-dev] [tc] [all] TC Report 18-26 In-Reply-To: <5e835365-2d1a-d388-66b1-88cdf8c9a0fb@redhat.com> References: <70afeb87-37c9-1595-ffa4-aadbd1a90228@gmail.com> <542aba82-5c0c-3549-e587-2deded610fe9@gmail.com> <5e835365-2d1a-d388-66b1-88cdf8c9a0fb@redhat.com> Message-ID: <20521a8b-5f58-6ee0-a805-7dc9400b301b@openstack.org> Finally found the time to properly read this... Zane Bitter wrote: > [...] > We chose to add features to Nova to compete with vCenter/oVirt, and not > to add features the would have enabled OpenStack as a whole to compete > with more than just the compute provisioning subset of EC2/Azure/GCP. Could you give an example of an EC2 action that would be beyond the "compute provisioning subset" that you think we should have built into Nova ? > Meanwhile, the other projects in OpenStack were working on building the > other parts of an AWS/Azure/GCP competitor. And our vague one-sentence > mission statement allowed us all to maintain the delusion that we were > all working on the same thing and pulling in the same direction, when in > truth we haven't been at all. Do you think that organizing (tying) our APIs along [micro]services, rather than building a sanely-organized user API on top of a sanely-organized set of microservices, played a role in that divide ? > We can decide that we want to be one, or the other, or both. But if we > don't all decide together then a lot of us are going to continue wasting > our time working at cross-purposes. If you are saying that we should choose between being vCenter or AWS, I would definitely say the latter. But I'm still not sure I see this issue in such a binary manner. Imagine if (as suggested above) we refactored the compute node and give it a user API, would that be one, the other, both ? Or just a sane addition to improve what OpenStack really is today: a set of open infrastructure components providing different services with each their API, with slight gaps and overlaps between them ? 
Personally, I'm not very interested in discussing what OpenStack could have been if we started building it today. I'm much more interested in discussing what to add or change in order to make it usable for more use cases while continuing to serve the needs of our existing users. And I'm not convinced that's an either/or choice... -- Thierry Carrez (ttx) From jaypipes at gmail.com Tue Jul 17 14:55:25 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Tue, 17 Jul 2018 10:55:25 -0400 Subject: [openstack-dev] [neutron] How to look up a project name from Neutron server code? In-Reply-To: References: Message-ID: On 07/17/2018 03:36 AM, Neil Jerram wrote: > Can someone help me with how to look up a project name (aka tenant name) > for a known project/tenant ID, from code (specifically a mechanism > driver) running in the Neutron server? > > I believe that means I need to make a GET REST call as here: > https://developer.openstack.org/api-ref/identity/v3/index.html#projects.  But > I don't yet understand how a piece of Neutron server code can ensure > that it has the right credentials to do that.  If someone happens to > have actual code for doing this, I'm sure that would be very helpful. > > (I'm aware that whenever the Neutron server processes an API request, > the project name for the project that generated that request is added > into the request context.  That is great when my code is running in an > API request context.  But there are other times when the code isn't in a > request context and still needs to map from a project ID to project > name; hence the question here.) Hi Neil, You basically answered your own question above :) The neutron request context gets built from oslo.context's Context.from_environ() [1] which has this note in the implementation [2]: # Load a new context object from the environment variables set by # auth_token middleware. See: # https://docs.openstack.org/keystonemiddleware/latest/api/keystonemiddleware.auth_token.html#what-auth-token-adds-to-the-request-for-use-by-the-openstack-service So, basically, simply look at the HTTP headers for HTTP_X_PROJECT_NAME. If you don't have access to a HTTP headers, then you'll need to pass some context object/struct to the code you're referring to. Might as well pass the neutron RequestContext (derived from oslo_context.Context) to the code you're referring to and you get all this for free. Best, -jay [1] https://github.com/openstack/oslo.context/blob/4abd5377e4d847102a4e87a528d689e31cc1713c/oslo_context/context.py#L424 [2] https://github.com/openstack/oslo.context/blob/4abd5377e4d847102a4e87a528d689e31cc1713c/oslo_context/context.py#L433-L435 From ianyrchoi at gmail.com Tue Jul 17 15:09:34 2018 From: ianyrchoi at gmail.com (Ian Y. Choi) Date: Wed, 18 Jul 2018 00:09:34 +0900 Subject: [openstack-dev] [i18n] Edge and Containers whitepapers ready for translation In-Reply-To: <5B4DF6B4.9030501@openstack.org> References: <5B4CB64F.4060602@openstack.org> <5B4CB93F.6070202@openstack.org> <5B4DF6B4.9030501@openstack.org> Message-ID: <7f053a98-718b-0470-9ffa-934f0e715a76@gmail.com> Hello, When I saw overall translation source strings on container whitepaper, I would infer that new edge computing whitepaper source strings would include HTML markup tags. On the other hand, the source strings of edge computing whitepaper which I18n team previously translated do not include HTML markup tags, since the source strings are based on just text format. 
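Returning to the neutron thread above (looking up a project name when no request context is available): the usual approach is to query keystone with service credentials through keystoneauth. A rough sketch, assuming the service already has a usable auth section registered in oslo.config (the group name 'keystone_authtoken' is only an example, and the credentials used must be allowed to read projects):

    from keystoneauth1 import loading as ks_loading
    from keystoneclient.v3 import client as ks_client
    from oslo_config import cfg


    def project_name(project_id):
        # Build an auth plugin and session from an existing config group,
        # then ask keystone for the project record.
        auth = ks_loading.load_auth_from_conf_options(
            cfg.CONF, 'keystone_authtoken')
        sess = ks_loading.load_session_from_conf_options(
            cfg.CONF, 'keystone_authtoken', auth=auth)
        keystone = ks_client.Client(session=sess)
        return keystone.projects.get(project_id).name

Inside an API request, as Jay notes, the name is already available on the request context, so a lookup like this is only needed for the out-of-request case.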
I really appreciate Akihiro's work on RST-based support on publishing translated edge computing whitepapers, since translators do not have to re-translate all the strings. On the other hand, it seems that I18n team needs to investigate on translating similar strings of HTML-based edge computing whitepaper source strings, which would discourage translators. That's my point of view on translating edge computing whitepaper. For translating container whitepaper, I want to further ask the followings since *I18n-based tools* would mean for translators that translators can test and publish translated whitepapers locally: - How to build translated container whitepaper using original Silverstripe-based repository?   https://docs.openstack.org/i18n/latest/tools.html describes well how to build translated artifacts for RST-based OpenStack repositories   but I could not find the way how to build translated container whitepaper with translated resources on Zanata. With many thanks, /Ian Jimmy McArthur wrote on 7/17/2018 11:01 PM: > Frank, > > I'm sorry to hear about the displeasure around the Edge paper.  As > mentioned in a prior thread, the RST format that Akihiro worked did > not work with the  Zanata process that we have been using with our > CMS.  Additionally, the existing EDGE page is a PDF, so we had to > build a new template to work with the new HTML whitepaper layout we > created for the Containers paper. I outlined this in the thread " > [OpenStack-I18n] [Edge-computing] [Openstack-sigs] Edge Computing > Whitepaper Translation" on 6/25/18 and mentioned we would be ready > with the template around 7/13. > > We completed the work on the new whitepaper template and then put out > the pot files on Zanata so we can get the po language files back. If > this process is too cumbersome for the translation team, I'm open to > discussion, but right now our entire translation process is based on > the official OpenStack Docs translation process outlined by the i18n > team: https://docs.openstack.org/i18n/latest/en_GB/tools.html > > Again, I realize Akihiro put in some work on his own proposing the new > translation type. If the i18n team is moving to this format instead, > we can work on redoing our process. > > Please let me know if I can clarify further. > > Thanks, > Jimmy > > Frank Kloeker wrote: >> Hi Jimmy, >> >> permission was added for you and Sebastian. The Container Whitepaper >> is on the Zanata frontpage now. But we removed Edge Computing >> whitepaper last week because there is a kind of displeasure in the >> team since the results of translation are still not published beside >> Chinese version. It would be nice if we have a commitment from the >> Foundation that results are published in a specific timeframe. This >> includes your requirements until the translation should be available. >> >> thx Frank >> >> Am 2018-07-16 17:26, schrieb Jimmy McArthur: >>> Sorry, I should have also added... we additionally need permissions so >>> that we can add the a new version of the pot file to this project: >>> https://translate.openstack.org/project/view/edge-computing/versions?dswid=-7835 >>> >>> >>> Thanks! >>> Jimmy >>> >>> >>> >>> Jimmy McArthur wrote: >>>> Hi all - >>>> >>>> We have both of the current whitepapers up and available for >>>> translation.  Can we promote these on the Zanata homepage? 
>>>> >>>> https://translate.openstack.org/project/view/leveraging-containers-openstack?dswid=5684 >>>> https://translate.openstack.org/iteration/view/edge-computing/master/documents?dswid=5684 >>>> Thanks all! >>>> Jimmy >>> >>> >>> __________________________________________________________________________ >>> >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From amy at demarco.com Tue Jul 17 15:22:13 2018 From: amy at demarco.com (Amy Marrich) Date: Tue, 17 Jul 2018 10:22:13 -0500 Subject: [openstack-dev] [Zun] Zun UI questions In-Reply-To: References: Message-ID: Hongbin, I just wanted to follow-up and let a fresh install of the most recent release is working with no errors. Thanks so much for your assistance! Amy (spotz) On Sun, Jul 15, 2018 at 10:49 AM, Hongbin Lu wrote: > Hi Amy, > > The wrong Keystone URI might be due to the an issue of the devstack > plugins. I have proposed fixes [1] [2] for that. Thanks for the suggestion > about adding a note for uninstalling pip packages. I have created a ticket > [3] for that. > > [1] https://review.openstack.org/#/c/582799/ > [2] https://review.openstack.org/#/c/582800/ > [3] https://bugs.launchpad.net/zun/+bug/1781807 > > Best regards, > Hongbin > > On Sun, Jul 15, 2018 at 10:16 AM Amy Marrich wrote: > >> Hongbin, >> >> Doing the pip uninstall did the trick with the Flask version, when >> running another debug I did notice an incorrect IP for the Keystone URI and >> have restarted the machines networking and cleaned up the /etc/hosts. >> >> When doing a second stack, I did need to uninstall the pip packages again >> for the second stack.sh to complete, might be worth adding this to the docs >> as a note if people have issues. Second install still had the wrong IP >> showing as the Keystone URI, I'll try s fresh machine install next. >> >> Thanks for all your help! >> >> Amy (spotz) >> >> On Sat, Jul 14, 2018 at 9:42 PM, Hongbin Lu wrote: >> >>> Hi Amy, >>> >>> Today, I created a fresh VM with Ubuntu16.04 and run ./stack.sh with >>> your local.conf, but I couldn't reproduce the two issues you mentioned (the >>> Flask version conflict issue and 401 issue). By analyzing the logs you >>> provided, it seems some python packages in your machine are pretty old. >>> First, could you paste me the output of "pip freeze". Second, if possible, >>> I would suggest to remove all the python packages and re-stack again as >>> following: >>> >>> * Run ./unstack >>> * Run ./clean.sh >>> * Run pip freeze | grep -v '^\-e' | xargs sudo pip uninstall -y >>> * Run ./stack >>> >>> Please let us know if above steps still don't work. 
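The doc note Amy suggests could boil down to roughly the following (disposable test VMs only, since the pip line removes every non-editable package; script names as found in a standard devstack checkout):

    cd ~/devstack
    ./unstack.sh
    ./clean.sh
    pip freeze | grep -v '^\-e' | xargs sudo pip uninstall -y
    ./stack.sh

If 401s persist after a clean re-stack, re-running the failing call with the client debug flag (for example 'openstack --debug appcontainer list') shows which keystone URI and token are actually being used, which is how the incorrect Keystone IP above was noticed.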
>>> >>> Best regards, >>> Hongbin >>> >>> On Sat, Jul 14, 2018 at 5:15 PM Amy Marrich wrote: >>> >>>> Hongbin, >>>> >>>> This was a fresh install from master this week >>>> >>>> commit 6312db47e9141acd33142ae857bdeeb92c59994e >>>> >>>> Merge: ef35713 2742875 >>>> >>>> Author: Zuul >>>> >>>> Date: Wed Jul 11 20:36:12 2018 +0000 >>>> >>>> >>>> Merge "Cleanup keystone's removed config options" >>>> >>>> Except for builds with my patching kuryr-libnetwork locally builds have >>>> been done with reclone and fresh /opt/stack directories. Patch has been >>>> submitted for the Flask issue >>>> >>>> https://review.openstack.org/582634 but hasn't passed the gates yet. >>>> >>>> >>>> Following the instructions above on a new pull of devstack: >>>> >>>> commit 3b5477d6356a62d7d64a519a4b1ac99309d251c0 >>>> >>>> Author: OpenStack Proposal Bot >>>> >>>> Date: Thu Jul 12 06:17:32 2018 +0000 >>>> >>>> Updated from generate-devstack-plugins-list >>>> >>>> Change-Id: I8f702373c76953a0a29285f410d368c975ba4024 >>>> >>>> >>>> I'm still able to use the openstack CLI for non-Zun commands but 401 on >>>> Zun >>>> >>>> root at zunui:~# openstack service list >>>> >>>> +----------------------------------+------------------+----- >>>> -------------+ >>>> >>>> | ID | Name | Type >>>> | >>>> >>>> +----------------------------------+------------------+----- >>>> -------------+ >>>> >>>> | 06be414af2fd4d59af8de0ccff78149e | placement | placement >>>> | >>>> >>>> | 0df1832d6f8c4a5aa7b5e8bacf7339f8 | nova | compute >>>> | >>>> >>>> | 3f1b2692a184443c85b631fa7acf714d | heat-cfn | cloudformation >>>> | >>>> >>>> | 3f6bcbb75f684041bf6eeaaf5ab4c14b | cinder | block-storage >>>> | >>>> >>>> | 6e06ac1394ee4872aa134081d190f18e | neutron | network >>>> | >>>> >>>> | 76afda8ecd18474ba382dbb4dc22b4bb | kuryr-libnetwork | >>>> kuryr-libnetwork | >>>> >>>> | 7b336b8b9b9c4f6bbcc5fa6b9400ccaf | cinderv3 | volumev3 >>>> | >>>> >>>> | a0f83f30276d45e2bd5fd14ff8410380 | nova_legacy | compute_legacy >>>> | >>>> >>>> | a12600a2467141ff89a406ec3b50bacb | cinderv2 | volumev2 >>>> | >>>> >>>> | d5bfb92a244b4e7888cae28ca6b2bbac | keystone | identity >>>> | >>>> >>>> | d9ea196e9cae4b0691f6c4b619eb47c9 | zun | container >>>> | >>>> >>>> | e528282e291f4ddbaaac6d6c82a0036e | cinder | volume >>>> | >>>> >>>> | e6078b2c01184f88a784b390f0b28263 | glance | image >>>> | >>>> >>>> | e650be6c67ac4e5c812f2a4e4cca2544 | heat | orchestration >>>> | >>>> >>>> +----------------------------------+------------------+----- >>>> -------------+ >>>> >>>> root at zunui:~# openstack appcontainer list >>>> >>>> Unauthorized (HTTP 401) (Request-ID: req-e44f5caf-642c-4435-ab1d- >>>> 98feae1fada9) >>>> >>>> root at zunui:~# zun list >>>> >>>> ERROR: Unauthorized (HTTP 401) (Request-ID: req-587e39d6-463f-4921-b45b- >>>> 29576a00c242) >>>> >>>> >>>> Thanks, >>>> >>>> >>>> Amy (spotz) >>>> >>>> >>>> >>>> On Fri, Jul 13, 2018 at 10:34 PM, Hongbin Lu >>>> wrote: >>>> >>>>> Hi Amy, >>>>> >>>>> First, I want to confirm which version of devstack you were using? (go >>>>> to the devstack folder and type "git log -1"). >>>>> >>>>> If possible, I would suggest to do the following steps: >>>>> >>>>> * Run ./unstack >>>>> * Run ./clean >>>>> * Pull down the latest version of devstack (if it is too old) >>>>> * Pull down the latest version of all the projects under /opt/stack/ >>>>> * Run ./stack >>>>> >>>>> If above steps couldn't resolve the problem, please let me know. 
>>>>> >>>>> Best regards, >>>>> Hongbin >>>>> >>>>> >>>>> On Fri, Jul 13, 2018 at 10:33 AM Amy Marrich wrote: >>>>> >>>>>> Hongbin, >>>>>> >>>>>> Let me know if you still want me to mail the dev list, but here are >>>>>> the gists for the installations and the broken CLI I mentioned >>>>>> >>>>>> local.conf - which is basically the developer quickstart instructions >>>>>> for Zun >>>>>> >>>>>> https://gist.github.com/spotz/69c5cfa958b233b4c3d232bbfcc451ea >>>>>> >>>>>> >>>>>> This is the failure with a fresh devstack installation >>>>>> >>>>>> https://gist.github.com/spotz/14e19b8a3e0b68b7db2f96bff7fdf4a8 >>>>>> >>>>>> >>>>>> Requirements repo change a few weeks ago >>>>>> >>>>>> http://git.openstack.org/cgit/openstack/requirements/commit/?id= >>>>>> cb6c00c01f82537a38bd0c5a560183735cefe2f9 >>>>>> >>>>>> >>>>>> Changed local Flask version for curry-libnetwork and set local.conf >>>>>> to reclone=no and then installed and tried to use the CLI. >>>>>> >>>>>> https://gist.github.com/spotz/b53d729fc72d24b4454ee55519e72c07 >>>>>> >>>>>> >>>>>> It makes sense that Flask would cause an issue on the UI installation >>>>>> even though it's enabled even for a non-enabled build according to the >>>>>> quickstart doc. I don't mind doing a patch to fix kuryr-libnetwork to bring >>>>>> it up to the current requirements. I don't however know where to start >>>>>> troubleshooting the 401 issue. On a different machine I have decstack with >>>>>> Zun but no zun-ui and the CLI responds correctly. >>>>>> >>>>>> >>>>>> Thanks, >>>>>> >>>>>> Amy (spotz) >>>>>> >>>>>> >>>>>> On Thu, Jul 12, 2018 at 11:21 PM, Hongbin Lu >>>>>> wrote: >>>>>> >>>>>>> Hi Amy, >>>>>>> >>>>>>> I am also in doubts about the Flask version issue. Perhaps you can >>>>>>> provide more details about this issue? Do you see any error message? >>>>>>> >>>>>>> Best regards, >>>>>>> Hongbin >>>>>>> >>>>>>> On Thu, Jul 12, 2018 at 10:49 PM Shu M. wrote: >>>>>>> >>>>>>>> >>>>>>>> Hi Amy, >>>>>>>> >>>>>>>> Thank you for sharing the issues. Zun UI does not require >>>>>>>> kuryr-libnetwork directly, and keystone seems to have same requirements for >>>>>>>> Flask. So I wonder why install failure occurred by Zun UI. >>>>>>>> >>>>>>>> Could you share your correction for requrements. >>>>>>>> >>>>>>>> Unfortunately, I'm in trouble on my development environment since >>>>>>>> yesterday. So I can not investigate the issues quickly. >>>>>>>> I added Hongbin to this topic, he would help us. >>>>>>>> >>>>>>>> Best regards, >>>>>>>> Shu Muto >>>>>>>> >>>>>>>> 2018年7月13日(金) 9:29 Amy Marrich : >>>>>>>> >>>>>>>>> Hi, >>>>>>>>> >>>>>>>>> I was given your email on the #openstack-zun channel as a source >>>>>>>>> for questions for the UI. I've found a few issues installing the Master >>>>>>>>> branch on devstack and not sure if they should be bugged. >>>>>>>>> >>>>>>>>> kuryr-libnetwork has incorrect versions for Flask in both >>>>>>>>> lower-constraints.txt and requirements.txt, this only affects installation >>>>>>>>> when enabling zun-ui, I'll be more then happy to bug and patch it, if >>>>>>>>> confirmed as an issue. >>>>>>>>> >>>>>>>>> Once correcting the requirements locally to complete the devstack >>>>>>>>> installation, I'm receiving 401s when using both the OpenStack CLI and Zun >>>>>>>>> client. I'm also unable to create a container within Horizon. The same >>>>>>>>> credentials work fine for other OpenStack commands. >>>>>>>>> >>>>>>>>> On another server without the ui enabled I can use both the CLI >>>>>>>>> and client no issues. 
I'm not sure if there's something missing on >>>>>>>>> https://docs.openstack.org/zun/latest/contributor/quickstart.html >>>>>>>>> or some other underlying issue. >>>>>>>>> >>>>>>>>> Any help or thoughts appreciated! >>>>>>>>> >>>>>>>>> Thanks, >>>>>>>>> >>>>>>>>> Amy (spotz) >>>>>>>>> >>>>>>>>> >>>>>> >>>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From jcoufal at redhat.com Tue Jul 17 16:01:21 2018 From: jcoufal at redhat.com (Jaromir Coufal) Date: Tue, 17 Jul 2018 18:01:21 +0200 Subject: [openstack-dev] [tripleo] Stein blueprint - Plan to remove Keepalived support (replaced by Pacemaker) In-Reply-To: References: <20180716180719.GB4445@palahniuk.int.rhx> Message-ID: <0D8A4A7B-8F11-47C4-9D4F-7807C0B1591B@redhat.com> Not rooting for any approach here, just want to add a bit of factors which might play a role when deciding which way to go: A) Performance matters, we should be improving simplicity and speed of deployments rather than making it heavier. If the deployment time and resource consumption is not significantly higher, I think it doesn’t cause an issue. But if there is a significant difference between PCMK and keepalived architecture, we would need to review that. B) Containerization of PCMK plans - eventually we would like to run the whole undercloud/overcloud on minimal OS in containers to keep improving the operations on the nodes (updates/upgrades/etc). If because PCMK we would be forever stuck on BM, it would be a bit of pita. As Michele said, maybe we can re-visit this. C) Unification of undercloud/overcloud is important for us, so +1 to whichever method is being used in both. But what I know, HA folks went to keepalived since it is simpler so would be good to keep in sync (and good we have their presence here actually) :) D) Undercloud HA is a nice have which I think we want to get to one day, but it is not in as big demand as for example edge deployments, BM provisioning with pure OS, or multiple envs managed by single undercloud. So even though undercloud HA is important, it won’t bring operators as many benefits as the previously mentioned improvements. Let’s keep it in mind when we are considering the amount of work needed for it. E) One of the use-cases we want to take into account is expanind a single-node deployment (all-in-one) to 3 node HA controller. I think it is important when evaluating PCMK/keepalived HTH — Jarda > On Jul 17, 2018, at 05:04, Emilien Macchi wrote: > > Thanks everyone for the feedback, I've made a quick PoC: > https://review.openstack.org/#/q/topic:bp/undercloud-pacemaker-default > > And I'm currently doing local testing. I'll publish results when progress is made, but I've made it so we have the choice to enable pacemaker (disabled by default), where keepalived would remain the default for now. > > On Mon, Jul 16, 2018 at 2:07 PM Michele Baldessari wrote: > On Mon, Jul 16, 2018 at 11:48:51AM -0400, Emilien Macchi wrote: > > On Mon, Jul 16, 2018 at 11:42 AM Dan Prince wrote: > > [...] > > > > > The biggest downside IMO is the fact that our Pacemaker integration is > > > not containerized. Nor are there any plans to finish the > > > containerization of it. Pacemaker has to currently run on baremetal > > > and this makes the installation of it for small dev/test setups a lot > > > less desirable. It can launch containers just fine but the pacemaker > > > installation itself is what concerns me for the long term. 
> > > > > > Until we have plans for containizing it I suppose I would rather see > > > us keep keepalived as an option for these smaller setups. We can > > > certainly change our default Undercloud to use Pacemaker (if we choose > > > to do so). But having keepalived around for "lightweight" (zero or low > > > footprint) installs that work is really quite desirable. > > > > > > > That's a good point, and I agree with your proposal. > > Michele, what's the long term plan regarding containerized pacemaker? > > Well, we kind of started evaluating it (there was definitely not enough > time around pike/queens as we were busy landing the bundles code), then > due to discussions around k8s it kind of got off our radar. We can > at least resume the discussions around it and see how much effort it > would be. I'll bring it up with my team and get back to you. > > cheers, > Michele > -- > Michele Baldessari > C2A5 9DA3 9961 4FFB E01B D0BC DDD4 DCCB 7515 5C6D > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > -- > Emilien Macchi > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From jimmy at openstack.org Tue Jul 17 16:02:54 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Tue, 17 Jul 2018 11:02:54 -0500 Subject: [openstack-dev] [i18n] Edge and Containers whitepapers ready for translation In-Reply-To: <7f053a98-718b-0470-9ffa-934f0e715a76@gmail.com> References: <5B4CB64F.4060602@openstack.org> <5B4CB93F.6070202@openstack.org> <5B4DF6B4.9030501@openstack.org> <7f053a98-718b-0470-9ffa-934f0e715a76@gmail.com> Message-ID: <5B4E132E.5050607@openstack.org> Ian raises some great points :) I'll try to address below... Ian Y. Choi wrote: > Hello, > > When I saw overall translation source strings on container whitepaper, > I would infer that new edge computing whitepaper > source strings would include HTML markup tags. One of the things I discussed with Ian and Frank in Vancouver is the expense of recreating PDFs with new translations. It's prohibitively expensive for the Foundation as it requires design resources which we just don't have. As a result, we created the Containers whitepaper in HTML, so that it could be easily updated w/o working with outside design contractors. I indicated that we would also be moving the Edge paper to HTML so that we could prevent that additional design resource cost. > On the other hand, the source strings of edge computing whitepaper > which I18n team previously translated do not include HTML markup tags, > since the source strings are based on just text format. The version that Akihiro put together was based on the Edge PDF, which we unfortunately didn't have the resources to implement in the same format. > > I really appreciate Akihiro's work on RST-based support on publishing > translated edge computing whitepapers, since > translators do not have to re-translate all the strings. I would like to second this. It took a lot of initiative to work on the RST-based translation. At the moment, it's just not usable for the reasons mentioned above. 
> On the other hand, it seems that I18n team needs to investigate on > translating similar strings of HTML-based edge computing whitepaper > source strings, which would discourage translators. Can you expand on this? I'm not entirely clear on why the HTML based translation is more difficult. > > That's my point of view on translating edge computing whitepaper. > > For translating container whitepaper, I want to further ask the > followings since *I18n-based tools* > would mean for translators that translators can test and publish > translated whitepapers locally: > > - How to build translated container whitepaper using original > Silverstripe-based repository? > https://docs.openstack.org/i18n/latest/tools.html describes well how > to build translated artifacts for RST-based OpenStack repositories > but I could not find the way how to build translated container > whitepaper with translated resources on Zanata. This is a little tricky. It's possible to set up a local version of the OpenStack website (https://github.com/OpenStackweb/openstack-org/blob/master/installation.md). However, we have to manually ingest the po files as they are completed and then push them out to production, so that wouldn't do much to help with your local build. I'm open to suggestions on how we can make this process easier for the i18n team. Thank you, Jimmy > > > With many thanks, > > /Ian > > Jimmy McArthur wrote on 7/17/2018 11:01 PM: >> Frank, >> >> I'm sorry to hear about the displeasure around the Edge paper. As >> mentioned in a prior thread, the RST format that Akihiro worked did >> not work with the Zanata process that we have been using with our >> CMS. Additionally, the existing EDGE page is a PDF, so we had to >> build a new template to work with the new HTML whitepaper layout we >> created for the Containers paper. I outlined this in the thread " >> [OpenStack-I18n] [Edge-computing] [Openstack-sigs] Edge Computing >> Whitepaper Translation" on 6/25/18 and mentioned we would be ready >> with the template around 7/13. >> >> We completed the work on the new whitepaper template and then put out >> the pot files on Zanata so we can get the po language files back. If >> this process is too cumbersome for the translation team, I'm open to >> discussion, but right now our entire translation process is based on >> the official OpenStack Docs translation process outlined by the i18n >> team: https://docs.openstack.org/i18n/latest/en_GB/tools.html >> >> Again, I realize Akihiro put in some work on his own proposing the >> new translation type. If the i18n team is moving to this format >> instead, we can work on redoing our process. >> >> Please let me know if I can clarify further. >> >> Thanks, >> Jimmy >> >> Frank Kloeker wrote: >>> Hi Jimmy, >>> >>> permission was added for you and Sebastian. The Container Whitepaper >>> is on the Zanata frontpage now. But we removed Edge Computing >>> whitepaper last week because there is a kind of displeasure in the >>> team since the results of translation are still not published beside >>> Chinese version. It would be nice if we have a commitment from the >>> Foundation that results are published in a specific timeframe. This >>> includes your requirements until the translation should be available. >>> >>> thx Frank >>> >>> Am 2018-07-16 17:26, schrieb Jimmy McArthur: >>>> Sorry, I should have also added... 
we additionally need permissions so >>>> that we can add the a new version of the pot file to this project: >>>> https://translate.openstack.org/project/view/edge-computing/versions?dswid=-7835 >>>> >>>> >>>> Thanks! >>>> Jimmy >>>> >>>> >>>> >>>> Jimmy McArthur wrote: >>>>> Hi all - >>>>> >>>>> We have both of the current whitepapers up and available for >>>>> translation. Can we promote these on the Zanata homepage? >>>>> >>>>> https://translate.openstack.org/project/view/leveraging-containers-openstack?dswid=5684 >>>>> https://translate.openstack.org/iteration/view/edge-computing/master/documents?dswid=5684 >>>>> Thanks all! >>>>> Jimmy >>>> >>>> >>>> __________________________________________________________________________ >>>> >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > From emilien at redhat.com Tue Jul 17 16:12:11 2018 From: emilien at redhat.com (Emilien Macchi) Date: Tue, 17 Jul 2018 12:12:11 -0400 Subject: [openstack-dev] [tripleo] The Weekly Owl - 25th Edition Message-ID: Your fellow reporter took a break from writing, but is now back on his pen. Welcome to the twenty-fifth edition of a weekly update in TripleO world! The goal is to provide a short reading (less than 5 minutes) to learn what's new this week. Any contributions and feedback are welcome. Link to the previous version: http://lists.openstack.org/pipermail/openstack-dev/2018-June/131426.html +---------------------------------+ | General announcements | +---------------------------------+ +--> Rocky Milestone 3 is next week. After, any feature code will require Feature Freeze Exception (FFE), asked on the mailing-list. We'll enter a bug-fix only and stabilization period, until we can push the first stable version of Rocky. +--> Next PTG will be in Denver, please propose topics: https://etherpad.openstack.org/p/tripleoci-ptg-stein +--> Multiple squads are currently brainstorming a framework to provide validations pre/post upgrades - stay in touch! +------------------------------+ | Continuous Integration | +------------------------------+ +--> Sprint theme: migration to Zuul v3 (More on https://trello.com/c/vyWXcKOB/841-sprint-16-goals) +--> Sagi is the rover and Chandan is the ruck. Please tell them any CI issue. +--> Promotion on master is 4 days, 0 days on Queens and Pike and 1 day on Ocata. +--> More: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting +-------------+ | Upgrades | +-------------+ +--> Good progress on major upgrades workflow, need reviews! +--> More: https://etherpad.openstack.org/p/tripleo-upgrade-squad-status +---------------+ | Containers | +---------------+ +--> We switched python-tripleoclient to deploy containerized undercloud by default! +--> Image prepare via workflow is still work in progress. 
+--> More: https://etherpad.openstack.org/p/tripleo-containers-squad-status +----------------------+ | config-download | +----------------------+ +--> UI integration is almost done (need review) +--> Bug with failure listing is being fixed: https://bugs.launchpad.net/tripleo/+bug/1779093 +--> More: https://etherpad.openstack.org/p/tripleo-config-download-squad-status +--------------+ | Integration | +--------------+ +--> We're enabling decoupled deployment plans e.g for OpenShift, DPDK etc: https://review.openstack.org/#/q/topic:alternate_plans+(status:open+OR+status:merged) (need reviews). +--> More: https://etherpad.openstack.org/p/tripleo-integration-squad-status +---------+ | UI/CLI | +---------+ +--> Good progress on network configuration via UI +--> Config-download patches are being reviewed and a lot of testing is going on. +--> More: https://etherpad.openstack.org/p/tripleo-ui-cli-squad-status +---------------+ | Validations | +---------------+ +--> Working on OpenShift validations, need reviews. +--> More: https://etherpad.openstack.org/p/tripleo-validations-squad-status +---------------+ | Networking | +---------------+ +--> No updates this week. +--> More: https://etherpad.openstack.org/p/tripleo-networking-squad-status +--------------+ | Workflows | +--------------+ +--> No updates this week. +--> More: https://etherpad.openstack.org/p/tripleo-workflows-squad-status +-----------+ | Security | +-----------+ +--> Working on Secrets management and Limit TripleO users efforts +--> More: https://etherpad.openstack.org/p/tripleo-security-squad +------------+ | Owl fact | +------------+ Elf owls live in a cacti. They are the smallest owls, and live in the southwestern United States and Mexico. It will sometimes make its home in the giant saguaro cactus, nesting in holes made by other animals. However, the elf owl isn’t picky and will also live in trees or on telephone poles. Source: http://mentalfloss.com/article/68473/15-mysterious-facts-about-owls Thank you all for reading and stay tuned! -- Your fellow reporter, Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From cdent+os at anticdent.org Tue Jul 17 16:19:56 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 17 Jul 2018 17:19:56 +0100 (BST) Subject: [openstack-dev] [tc] [all] TC Report 18-29 Message-ID: HTML: https://anticdent.org/tc-report-18-29.html Again a relatively slow week for TC discussion. Several members were travelling for one reason or another. A theme from the past week is a recurring one: How can OpenStack, the community, highlight gaps where additional contribution may be needed, and what can the TC, specifically, do to help? Julia relayed [that question on Wednesday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-11.log.html#t2018-07-11T00:39:16) and it meandered a bit from there. Are the mechanics of open source a bit strange in OpenStack because of continuing boundaries between the people who sell it, package it, build it, deploy it, operate it, and use it? If so, how do we accelerate blurring those boundaries? The [combined PTG](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-11.log.html#t2018-07-11T00:39:16) will help, some. At Thursday's office hours Alan Clark [listened in](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-12.log.html#t2018-07-12T15:02:34). He's a welcome presence from the Foundation Board. 
At the last summit in Vancouver members of the TC and the Board made a commitment to improve communication. Meanwhile, [back on Wednesday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-11.log.html#t2018-07-11T15:29:30) I expressed a weird sense of jealousy of all the nice visible things one sees the foundation doing for the newer strategic areas in the foundation. The issue here is not that the foundation doesn't do stuff for OpenStack-classic, but that the new stuff is visible and _over there_. That office hour included [more talk](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-12.log.html#t2018-07-12T15:07:27) about project-need visibility. Lately, I've been feeling that it is more important to make the gaps in contribution visible than it is to fill them. If we continue to perform above and beyond, there is no incentive for our corporate value extractors to supplement their investment. That way lies burnout. The [health tracker](https://wiki.openstack.org/wiki/OpenStack_health_tracker) is part of making things more visible. So are [OpenStack wide goals](https://governance.openstack.org/tc/goals/index.html). But there is more we can do as a community and as individuals. Don't be a hero: If you're overwhelmed or overworked tell your peers and your management. In other news: Zane summarized some of his thoughts about [Limitations of the Layered Model of OpenStack](https://www.zerobanana.com/archive/2018/07/17#openstack-layer-model-limitations). This is a continuation of the technical vision discussions that have been happening on [an etherpad](https://etherpad.openstack.org/p/tech-vision-2018) and [email thread](http://lists.openstack.org/pipermail/openstack-dev/2018-July/131955.html). -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From gfidente at redhat.com Tue Jul 17 16:21:32 2018 From: gfidente at redhat.com (Giulio Fidente) Date: Tue, 17 Jul 2018 18:21:32 +0200 Subject: [openstack-dev] [tripleo] Updates/upgrades equivalent for external_deploy_tasks In-Reply-To: <1420c4ac-b6f6-5606-d1c8-1bb05a941d2e@redhat.com> References: <1420c4ac-b6f6-5606-d1c8-1bb05a941d2e@redhat.com> Message-ID: <6f063c2e-efb8-a9c5-cd49-44b7c6a0942e@redhat.com> On 07/10/2018 04:20 PM, Jiří Stránský wrote: > Hi, > > with the move to config-download deployments, we'll be moving from > executing external installers (like ceph-ansible) via Heat resources > encapsulating Mistral workflows towards executing them via Ansible > directly (nested Ansible process via external_deploy_tasks). > > Updates and upgrades still need to be addressed here. I think we should > introduce external_update_tasks and external_upgrade_tasks for this > purpose, but i see two options how to construct the workflow with them. > > During update (mentioning just updates, but upgrades would be done > analogously) we could either: > > A) Run external_update_tasks, then external_deploy_tasks. > > This works with the assumption that updates are done very similarly to > deployment. The external_update_tasks could do some prep work and/or > export Ansible variables which then could affect what > external_deploy_tasks do (e.g. in case of ceph-ansible we'd probably > override the playbook path). This way we could also disable specific > parts of external_deploy_tasks on update, in case reuse is undesirable > in some places. thanks +1 on A from me as well we currently cycle through a list of playbooks to execute which can be given as a Heat parameter ... 
I suppose we'll need to find a way to make an ansible variable override the Heat value -- Giulio Fidente GPG KEY: 08D733BA From neil at tigera.io Tue Jul 17 16:28:57 2018 From: neil at tigera.io (Neil Jerram) Date: Tue, 17 Jul 2018 17:28:57 +0100 Subject: [openstack-dev] [neutron] How to look up a project name from Neutron server code? In-Reply-To: References: Message-ID: On Tue, Jul 17, 2018 at 3:55 PM Jay Pipes wrote: > On 07/17/2018 03:36 AM, Neil Jerram wrote: > > Can someone help me with how to look up a project name (aka tenant name) > > for a known project/tenant ID, from code (specifically a mechanism > > driver) running in the Neutron server? > > > > I believe that means I need to make a GET REST call as here: > > https://developer.openstack.org/api-ref/identity/v3/index.html#projects. > But > > I don't yet understand how a piece of Neutron server code can ensure > > that it has the right credentials to do that. If someone happens to > > have actual code for doing this, I'm sure that would be very helpful. > > > > (I'm aware that whenever the Neutron server processes an API request, > > the project name for the project that generated that request is added > > into the request context. That is great when my code is running in an > > API request context. But there are other times when the code isn't in a > > request context and still needs to map from a project ID to project > > name; hence the question here.) > > Hi Neil, > > You basically answered your own question above :) The neutron request > context gets built from oslo.context's Context.from_environ() [1] which > has this note in the implementation [2]: > > # Load a new context object from the environment variables set by > # auth_token middleware. See: > # > > https://docs.openstack.org/keystonemiddleware/latest/api/keystonemiddleware.auth_token.html#what-auth-token-adds-to-the-request-for-use-by-the-openstack-service > > So, basically, simply look at the HTTP headers for HTTP_X_PROJECT_NAME. > If you don't have access to a HTTP headers, then you'll need to pass > some context object/struct to the code you're referring to. Might as > well pass the neutron RequestContext (derived from oslo_context.Context) > to the code you're referring to and you get all this for free. > > Best, > -jay > > [1] > > https://github.com/openstack/oslo.context/blob/4abd5377e4d847102a4e87a528d689e31cc1713c/oslo_context/context.py#L424 > > [2] > > https://github.com/openstack/oslo.context/blob/4abd5377e4d847102a4e87a528d689e31cc1713c/oslo_context/context.py#L433-L435 Many thanks for this reply, Jay. If I'm understanding fully, I believe it all works beautifully so long as the Neutron server is processing a specific API request, e.g. a port CRUD operation. Then, as you say, the RequestContext includes the name of the project/tenant that originated that request. I have an additional requirement, though, to do a occasional audit of standing resources in the Neutron DB, and to check that my mechanism driver's programming for them is correct. To do that, I have an independent eventlet thread that runs in admin context and occasionally queries Neutron resources, e.g. all the ports. For each port, the Neutron DB data includes the project_id, but not project_name, and I'd like at that point to be able to map from the project_id for each port to project_name. Do you have any thoughts on how I could do that? (E.g. perhaps there is some way of generating and looping round a request with the project_id, such that the middleware populates the project_name... 
but that sounds a bit baroque; I would hope that there would be a way of doing a simpler Keystone DB lookup.) Regards, Neil -------------- next part -------------- An HTML attachment was scrubbed... URL: From wolverine.av at gmail.com Tue Jul 17 16:47:17 2018 From: wolverine.av at gmail.com (Aditya Vaja) Date: Tue, 17 Jul 2018 22:17:17 +0530 Subject: [openstack-dev] [neutron] How to look up a project name from Neutron server code? In-Reply-To: References: Message-ID: <19a9a4f8-adba-4120-a4f8-3401f74b6b25@gmail.com> hey neil, neutron.conf has a section called ' [keystone_authtoken]’ which has credentials to query keystone as neutron. you can read the config as you’d typically do from the mechanism driver for any other property using oslo.config. you could then use python-keystoneclient with those creds to query the mapping. a sample is given in the keystoneclient repo [1]. via telegram [1] https://github.com/openstack/python-keystoneclient/blob/650716d0dd30a73ccabe3f0ec20eb722ca0d70d4/keystoneclient/v3/client.py#L102-L116 On Tue, Jul 17, 2018 at 9:58 PM, Neil Jerram wrote: On Tue, Jul 17, 2018 at 3:55 PM Jay Pipes < jaypipes at gmail.com [jaypipes at gmail.com] > wrote: On 07/17/2018 03:36 AM, Neil Jerram wrote: > Can someone help me with how to look up a project name (aka tenant name) > for a known project/tenant ID, from code (specifically a mechanism > driver) running in the Neutron server? > > I believe that means I need to make a GET REST call as here: > https://developer.openstack.org/api-ref/identity/v3/index.html#projects [https://developer.openstack.org/api-ref/identity/v3/index.html#projects] . But > I don't yet understand how a piece of Neutron server code can ensure > that it has the right credentials to do that. If someone happens to > have actual code for doing this, I'm sure that would be very helpful. > > (I'm aware that whenever the Neutron server processes an API request, > the project name for the project that generated that request is added > into the request context. That is great when my code is running in an > API request context. But there are other times when the code isn't in a > request context and still needs to map from a project ID to project > name; hence the question here.) Hi Neil, You basically answered your own question above :) The neutron request context gets built from oslo.context's Context.from_environ() [1] which has this note in the implementation [2]: # Load a new context object from the environment variables set by # auth_token middleware. See: # https://docs.openstack.org/keystonemiddleware/latest/api/keystonemiddleware.auth_token.html#what-auth-token-adds-to-the-request-for-use-by-the-openstack-service [https://docs.openstack.org/keystonemiddleware/latest/api/keystonemiddleware.auth_token.html#what-auth-token-adds-to-the-request-for-use-by-the-openstack-service] So, basically, simply look at the HTTP headers for HTTP_X_PROJECT_NAME. If you don't have access to a HTTP headers, then you'll need to pass some context object/struct to the code you're referring to. Might as well pass the neutron RequestContext (derived from oslo_context.Context) to the code you're referring to and you get all this for free. 
Best, -jay [1] https://github.com/openstack/oslo.context/blob/4abd5377e4d847102a4e87a528d689e31cc1713c/oslo_context/context.py#L424 [https://github.com/openstack/oslo.context/blob/4abd5377e4d847102a4e87a528d689e31cc1713c/oslo_context/context.py#L424] [2] https://github.com/openstack/oslo.context/blob/4abd5377e4d847102a4e87a528d689e31cc1713c/oslo_context/context.py#L433-L435 [https://github.com/openstack/oslo.context/blob/4abd5377e4d847102a4e87a528d689e31cc1713c/oslo_context/context.py#L433-L435] Many thanks for this reply, Jay. If I'm understanding fully, I believe it all works beautifully so long as the Neutron server is processing a specific API request, e.g. a port CRUD operation. Then, as you say, the RequestContext includes the name of the project/tenant that originated that request. I have an additional requirement, though, to do a occasional audit of standing resources in the Neutron DB, and to check that my mechanism driver's programming for them is correct. To do that, I have an independent eventlet thread that runs in admin context and occasionally queries Neutron resources, e.g. all the ports. For each port, the Neutron DB data includes the project_id, but not project_name, and I'd like at that point to be able to map from the project_id for each port to project_name. Do you have any thoughts on how I could do that? (E.g. perhaps there is some way of generating and looping round a request with the project_id, such that the middleware populates the project_name... but that sounds a bit baroque; I would hope that there would be a way of doing a simpler Keystone DB lookup.) Regards, Neil __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From jtaleric at redhat.com Tue Jul 17 17:33:01 2018 From: jtaleric at redhat.com (Joe Talerico) Date: Tue, 17 Jul 2018 13:33:01 -0400 Subject: [openstack-dev] [Browbeat] proposing agpoi as core Message-ID: Proposing agpoi as core for OpenStack Browbeat. He has been instruemntal in taking over the CI components of Browbeat. His contributions and reviews reflect that! Thanks! Joe -------------- next part -------------- An HTML attachment was scrubbed... URL: From jtaleric at redhat.com Tue Jul 17 17:37:13 2018 From: jtaleric at redhat.com (Joe Talerico) Date: Tue, 17 Jul 2018 13:37:13 -0400 Subject: [openstack-dev] [Browbeat] proposing agopi as core Message-ID: agopi** On Tue, Jul 17, 2018 at 1:33 PM, Joe Talerico wrote: > Proposing > ​agopi > as core for OpenStack Browbeat. He has been instruemntal in taking over > the CI components of Browbeat. His contributions and reviews reflect that! > > Thanks! > Joe > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Tue Jul 17 17:52:39 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 17 Jul 2018 13:52:39 -0400 Subject: [openstack-dev] [all][tc][release][election][adjutant] Welcome Adjutant as an official project! Message-ID: <1531849678-sup-8719@lrrr.local> The Adjutant team's application [1] to become an official project has been approved. Welcome! As I said on the review, because it is past the deadline for Rocky membership, Adjutant will not be considered part of the Rocky release, but a future release can be part of Stein. 
The team should complete the onboarding process for new projects, including holding PTL elections for Stein, setting up deliverable files in the openstack/releases repository, and adding meeting information to eavesdrop.openstack.org. I have left a comment on the patch setting up the Stein election to ask that the Adjutant team be included. We can also add Adjutant to the list of projects on docs.openstack.org for Stein, after updating your publishing job(s). Doug [1] https://review.openstack.org/553643 From Kevin.Fox at pnnl.gov Tue Jul 17 18:09:43 2018 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Tue, 17 Jul 2018 18:09:43 +0000 Subject: [openstack-dev] [tc] [all] TC Report 18-26 In-Reply-To: <20521a8b-5f58-6ee0-a805-7dc9400b301b@openstack.org> References: <70afeb87-37c9-1595-ffa4-aadbd1a90228@gmail.com> <542aba82-5c0c-3549-e587-2deded610fe9@gmail.com> <5e835365-2d1a-d388-66b1-88cdf8c9a0fb@redhat.com>, <20521a8b-5f58-6ee0-a805-7dc9400b301b@openstack.org> Message-ID: <1A3C52DFCD06494D8528644858247BF01C157CED@EX10MBOX03.pnnl.gov> Inlining with KF> ________________________________________ From: Thierry Carrez [thierry at openstack.org] Sent: Tuesday, July 17, 2018 7:44 AM To: openstack-dev at lists.openstack.org Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26 Finally found the time to properly read this... Zane Bitter wrote: > [...] > We chose to add features to Nova to compete with vCenter/oVirt, and not > to add features the would have enabled OpenStack as a whole to compete > with more than just the compute provisioning subset of EC2/Azure/GCP. Could you give an example of an EC2 action that would be beyond the "compute provisioning subset" that you think we should have built into Nova ? KF> How about this one... https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html :/ KF> IMO, its lack really crippled the use case. I've been harping on this one for over 4 years now... > Meanwhile, the other projects in OpenStack were working on building the > other parts of an AWS/Azure/GCP competitor. And our vague one-sentence > mission statement allowed us all to maintain the delusion that we were > all working on the same thing and pulling in the same direction, when in > truth we haven't been at all. Do you think that organizing (tying) our APIs along [micro]services, rather than building a sanely-organized user API on top of a sanely-organized set of microservices, played a role in that divide ? KF> Slightly off question I think. A combination of microservice api's + no api team to look at the api's as a whole allowed use cases to slip by. KF> Microservice api might have been ok with overall shepards. Though maybe that is what you were implying with 'sanely'? > We can decide that we want to be one, or the other, or both. But if we > don't all decide together then a lot of us are going to continue wasting > our time working at cross-purposes. If you are saying that we should choose between being vCenter or AWS, I would definitely say the latter. But I'm still not sure I see this issue in such a binary manner. KF> No, he said one, the either, or both. But the lack of decision allowed some teams to prioritize one without realizing its effects to others. KF> There are multiple vCenter replacements in opensource world. For example, oVirt. Its alreay way better at it then Nova. KF> There is not a replacement for AWS in the opensource world. The hope was OpenStack would be that, but others in the community did not agree with that vision. 
KF> Now that the community has changed drastically, what is the feeling now? We must decide. KF> Kubernetes has provided a solid base for doing cloudy things. Which is great. But the organization does not care to replace other AWS/Azure/etc services because there are companies interested in selling k8s on top of AWS/Azure/etc and integrate with the other services they already provide. KF> So, there is an Opportunity in the opensource community still for someone to write an opensource AWS alternative. VM's are just a very small part of it. KF> Is that OpenStack, or some other project? Imagine if (as suggested above) we refactored the compute node and give it a user API, would that be one, the other, both ? Or just a sane addition to improve what OpenStack really is today: a set of open infrastructure components providing different services with each their API, with slight gaps and overlaps between them ? Personally, I'm not very interested in discussing what OpenStack could have been if we started building it today. I'm much more interested in discussing what to add or change in order to make it usable for more use cases while continuing to serve the needs of our existing users. And I'm not convinced that's an either/or choice... KF> Sometimes it is time to hit the reset button because you either: a> you know something more then you did when you built that is really important b> the world changed and you can no longer going on the path you were c> the technical debt has grown very large and it is cheaper to start again KF> OpenStacks current architectural implementation really feels 1.0ish to me and all of those reasons are relevant. KF> I'm not saying we should just blindly hit the reset button. but I think it should be discussed/evaluated . Leaving it alone may have too much of a dragging effect on contribution. KF> I'm also not saying we leave existing users without a migration path either. Maybe an OpenStack 2.0 with migration tools would be an option. KF> OpenStacks architecture is really hamstringing it at this point. If it wants to make progress at chipping away at AWS, it can't be trying to build on top of the very narrow commons OpenStack provides at present and the boiler plate convention of 1, start new project 2, create sql databse, 3, create rabbit queues, 5, create api service, 6 create scheduler service, 7, create agents, 9, create keystone endpoints, 10, get it wrapped in 32 different deployment tools, 11, etc Thanks, Kevin -- Thierry Carrez (ttx) __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From neil at tigera.io Tue Jul 17 18:17:50 2018 From: neil at tigera.io (Neil Jerram) Date: Tue, 17 Jul 2018 19:17:50 +0100 Subject: [openstack-dev] [neutron] How to look up a project name from Neutron server code? In-Reply-To: <19a9a4f8-adba-4120-a4f8-3401f74b6b25@gmail.com> References: <19a9a4f8-adba-4120-a4f8-3401f74b6b25@gmail.com> Message-ID: Thanks Aditya, that looks like just what I need. Best wishes, Neil On Tue, Jul 17, 2018 at 5:48 PM Aditya Vaja wrote: > hey neil, > > neutron.conf has a section called '[keystone_authtoken]’ which has > credentials to query keystone as neutron. you can read the config as you’d > typically do from the mechanism driver for any other property using > oslo.config. 
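In practice that lookup is only a few lines. A minimal sketch, assuming the [keystone_authtoken] auth options have already been registered with oslo.config (keystonemiddleware normally does that when the Neutron server starts) and that the service user is allowed to read projects; the helper name here is made up:

  from keystoneauth1 import loading as ks_loading
  from keystoneclient.v3 import client as ks_client
  from oslo_config import cfg

  def project_name_from_id(project_id):
      # Build an authenticated session from neutron.conf [keystone_authtoken].
      auth = ks_loading.load_auth_from_conf_options(
          cfg.CONF, 'keystone_authtoken')
      sess = ks_loading.load_session_from_conf_options(
          cfg.CONF, 'keystone_authtoken', auth=auth)
      keystone = ks_client.Client(session=sess)
      # One GET /v3/projects/{id} per call.
      return keystone.projects.get(project_id).name

Caching the id-to-name mapping, or listing all projects once per audit pass, would keep the periodic audit thread from hammering Keystone.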
> > you could then use python-keystoneclient with those creds to query the > mapping. a sample is given in the keystoneclient repo [1]. > > via telegram > > [1] > https://github.com/openstack/python-keystoneclient/blob/650716d0dd30a73ccabe3f0ec20eb722ca0d70d4/keystoneclient/v3/client.py#L102-L116 > On Tue, Jul 17, 2018 at 9:58 PM, Neil Jerram wrote: > > On Tue, Jul 17, 2018 at 3:55 PM Jay Pipes wrote: > >> On 07/17/2018 03:36 AM, Neil Jerram wrote: >> > Can someone help me with how to look up a project name (aka tenant >> name) >> > for a known project/tenant ID, from code (specifically a mechanism >> > driver) running in the Neutron server? >> > >> > I believe that means I need to make a GET REST call as here: >> > https://developer.openstack.org/api-ref/identity/v3/index.html#projects. >> But >> > I don't yet understand how a piece of Neutron server code can ensure >> > that it has the right credentials to do that. If someone happens to >> > have actual code for doing this, I'm sure that would be very helpful. >> > >> > (I'm aware that whenever the Neutron server processes an API request, >> > the project name for the project that generated that request is added >> > into the request context. That is great when my code is running in an >> > API request context. But there are other times when the code isn't in a >> > request context and still needs to map from a project ID to project >> > name; hence the question here.) >> >> Hi Neil, >> >> You basically answered your own question above :) The neutron request >> context gets built from oslo.context's Context.from_environ() [1] which >> has this note in the implementation [2]: >> >> # Load a new context object from the environment variables set by >> # auth_token middleware. See: >> # >> >> https://docs.openstack.org/keystonemiddleware/latest/api/keystonemiddleware.auth_token.html#what-auth-token-adds-to-the-request-for-use-by-the-openstack-service >> >> So, basically, simply look at the HTTP headers for HTTP_X_PROJECT_NAME. >> If you don't have access to a HTTP headers, then you'll need to pass >> some context object/struct to the code you're referring to. Might as >> well pass the neutron RequestContext (derived from oslo_context.Context) >> to the code you're referring to and you get all this for free. >> >> Best, >> -jay >> >> [1] >> >> https://github.com/openstack/oslo.context/blob/4abd5377e4d847102a4e87a528d689e31cc1713c/oslo_context/context.py#L424 >> >> [2] >> >> https://github.com/openstack/oslo.context/blob/4abd5377e4d847102a4e87a528d689e31cc1713c/oslo_context/context.py#L433-L435 > > > Many thanks for this reply, Jay. > > If I'm understanding fully, I believe it all works beautifully so long as > the Neutron server is processing a specific API request, e.g. a port CRUD > operation. Then, as you say, the RequestContext includes the name of the > project/tenant that originated that request. > > I have an additional requirement, though, to do a occasional audit of > standing resources in the Neutron DB, and to check that my mechanism > driver's programming for them is correct. To do that, I have an independent > eventlet thread that runs in admin context and occasionally queries Neutron > resources, e.g. all the ports. For each port, the Neutron DB data includes > the project_id, but not project_name, and I'd like at that point to be able > to map from the project_id for each port to project_name. > > Do you have any thoughts on how I could do that? (E.g. 
perhaps there is > some way of generating and looping round a request with the project_id, > such that the middleware populates the project_name... but that sounds a > bit baroque; I would hope that there would be a way of doing a simpler > Keystone DB lookup.) > > Regards, > Neil > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Tue Jul 17 18:18:21 2018 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 17 Jul 2018 13:18:21 -0500 Subject: [openstack-dev] [oslo] Reminder about Oslo feature freeze In-Reply-To: References: Message-ID: <1a0abbec-bb24-d60b-db76-f7e733c08ae8@nemebean.com> And we are now officially in feature freeze for Oslo libraries. Only bugfixes should be going in at this point. I will note that the config drivers work is still in the process of merging because some of the later patches in that series got hung up on a unit test bug. I'm holding off on doing final feature releases until that has all merged. -Ben On 07/05/2018 11:46 AM, Ben Nemec wrote: > Hi, > > This is just a reminder that Oslo observes feature freeze earlier than > other projects so those projects have time to implement any new features > from Oslo.  Per the policy[1] we freeze one week before the non-client > library feature freeze, which is coming in two weeks.  Therefore, we > have about one week to land new features in Oslo.  Anything that misses > the deadline will most likely need to wait until Stein. > > Feel free to contact the Oslo team with any comments or questions.  Thanks. > > -Ben > > 1: > http://specs.openstack.org/openstack/oslo-specs/specs/policy/feature-freeze.html > From openstack at nemebean.com Tue Jul 17 18:25:40 2018 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 17 Jul 2018 13:25:40 -0500 Subject: [openstack-dev] [oslo][all] Heads up for the new oslo.policy release Message-ID: <999f999c-a2c4-d9a9-6d04-f3d408c3a3a9@nemebean.com> I just wanted to send a quick note about the recent oslo.policy release which may impact some projects. Some new functionality was added that allows a context object to be passed in to the enforcer directly, but as part of that we added a check that the type of the object passed in was valid for use. This caused an issue in Glance's unit tests because they were mocking the context object and a Mock object didn't pass the type check. This was fixed in [1], but if any other projects have a similar pattern in their unit tests it is possible it may affect them as well. If you do run into any issues with this, please contact the Oslo team in #openstack-oslo or with the [oslo] tag on the mailing list so we can help resolve them. Thanks. 
-Ben 1: https://review.openstack.org/#/c/582995/ From sombrafam at gmail.com Tue Jul 17 19:06:29 2018 From: sombrafam at gmail.com (Erlon Cruz) Date: Tue, 17 Jul 2018 16:06:29 -0300 Subject: [openstack-dev] [cinder][nova] Proper behavior for os-force_detach Message-ID: Hi Cinder and Nova folks, Working on some tests for our drivers, I stumbled upon this tempest test 'force_detach_volume' that is calling Cinder API passing a 'None' connector. At the time this was added several CIs went down, and people started discussing whether this (accepting/sending a None connector) would be the proper behavior for what is expected to a driver to do[1]. So, some of CIs started just skipping that test[2][3][4] and others implemented fixes that made the driver to disconnected the volume from all hosts if a None connector was received[5][6][7]. While implementing this fix seems to be straightforward, I feel that just removing the volume from all hosts is not the correct thing to do mainly considering that we can have multi-attach. So, my questions are: What is the best way to fix this problem? Should Cinder API continue to accept detachments with None connectors? If, so, what would be the effects on other Nova attachments for the same volume? Is there any side effect if the volume is not multi-attached? Additionally to this thread here, I should bring this topic to tomorrow's Cinder's meeting, so please join if you have something to share. Erlon ___________________ [1] https://bugs.launchpad.net/cinder/+bug/1686278 [2] https://openstack-ci-logs.aws.infinidat.com/14/578114/2/check/dsvm-tempest-infinibox-fc/14fa930/console.html [3] http://54.209.116.144/14/578114/2/check/kaminario-dsvm-tempest-full-iscsi/ce750c8/console.html [4] http://logs.openstack.netapp.com/logs/14/578114/2/upstream-check/cinder-cDOT-iSCSI/8e2c549/console.html#_2018-07-16_20_06_16_937286 [5] https://review.openstack.org/#/c/551832/1/cinder/volume/drivers/dell_emc/vnx/adapter.py [6] https://review.openstack.org/#/c/550324/2/cinder/volume/drivers/hpe/hpe_3par_common.py [7] https://review.openstack.org/#/c/536778/2/cinder/volume/drivers/infinidat.py -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Tue Jul 17 19:53:19 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 17 Jul 2018 15:53:19 -0400 Subject: [openstack-dev] [tc] Technical Committee update for 17 July Message-ID: <1531857118-sup-2154@lrrr.local> This is the weekly summary of work being done by the Technical Committee members. The full list of active items is managed in the wiki: https://wiki.openstack.org/wiki/Technical_Committee_Tracker We also track TC objectives for the cycle using StoryBoard at: https://storyboard.openstack.org/#!/project/923 == Recent Activity == Project updates: - Add ansible-role-openstack-operations to governance https://review.openstack.org/#/c/578963/ - Add ansible-role-tripleo-* to TripleO project https://review.openstack.org/#/c/579952/1 Other approved changes: - update the PTI to use tox for building docs https://review.openstack.org/#/c/580495/ Office hour logs: (I sent the update late last week, so we have only had one office hour since the last update.) - http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-12.log.html#t2018-07-12T14:59:39 == Ongoing Discussions == The Adjutant team application has been approved. Welcome! 
- https://review.openstack.org/553643 project team application - http://lists.openstack.org/pipermail/openstack-dev/2018-July/132308.html welcome announcement Tony and the rest of the election officials are scheduling the Stein PTL elections. - https://review.openstack.org/#/c/582109/ stein PTL election preparations Zane has updated his proposal for diversity requirements or guidance for new project teams. - https://review.openstack.org/#/c/567944/ == TC member actions/focus/discussions for the coming week(s) == We've made good progress on the health checks. If you anticipate having any trouble contacting your assigned teams before the PTG please let me know. Remember that we agreed to send status updates on initiatives separately to openstack-dev every two weeks. If you are working on something for which there has not been an update in a couple of weeks, please consider summarizing the status. == Contacting the TC == The Technical Committee uses a series of weekly "office hour" time slots for synchronous communication. We hope that by having several such times scheduled, we will have more opportunities to engage with members of the community from different timezones. Office hour times in #openstack-tc: - 09:00 UTC on Tuesdays - 01:00 UTC on Wednesdays - 15:00 UTC on Thursdays If you have something you would like the TC to discuss, you can add it to our office hour conversation starter etherpad at: https://etherpad.openstack.org/p/tc-office-hour-conversation-starters Many of us also run IRC bouncers which stay in #openstack-tc most of the time, so please do not feel that you need to wait for an office hour time to pose a question or offer a suggestion. You can use the string "tc-members" to alert the members to your question. You will find channel logs with past conversations at http://eavesdrop.openstack.org/irclogs/%23openstack-tc/ If you expect your topic to require significant discussion or to need input from members of the community other than the TC, please start a mailing list discussion on openstack-dev at lists.openstack.org and use the subject tag "[tc]" to bring it to the attention of TC members. From michele at acksyn.org Tue Jul 17 20:00:25 2018 From: michele at acksyn.org (Michele Baldessari) Date: Tue, 17 Jul 2018 22:00:25 +0200 Subject: [openstack-dev] [tripleo] Stein blueprint - Plan to remove Keepalived support (replaced by Pacemaker) In-Reply-To: <0D8A4A7B-8F11-47C4-9D4F-7807C0B1591B@redhat.com> References: <20180716180719.GB4445@palahniuk.int.rhx> <0D8A4A7B-8F11-47C4-9D4F-7807C0B1591B@redhat.com> Message-ID: <20180717200025.GA23000@palahniuk.int.rhx> Hi Jarda, thanks for these perspectives, this is very valuable! On Tue, Jul 17, 2018 at 06:01:21PM +0200, Jaromir Coufal wrote: > Not rooting for any approach here, just want to add a bit of factors which might play a role when deciding which way to go: > > A) Performance matters, we should be improving simplicity and speed of > deployments rather than making it heavier. If the deployment time and > resource consumption is not significantly higher, I think it doesn’t > cause an issue. But if there is a significant difference between PCMK > and keepalived architecture, we would need to review that. +1 Should the pcmk take substantially more time then I agree, not worth defaulting to it. Worth also exploring how we could tweak things to make the setup of the cluster a bit faster (on a single node we can lower certain wait operations) but full agreement on this point. 
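For a sense of where the extra time goes, the bootstrap a one-node cluster needs before any resource can start looks roughly like this (node name and password are placeholders; fencing and quorum are typically already relaxed in the single-node case):

  pcs cluster auth undercloud.localdomain -u hacluster -p <password>
  pcs cluster setup --name undercloud undercloud.localdomain
  pcs cluster start --all
  pcs property set stonith-enabled=false
  pcs property set no-quorum-policy=ignore

None of that exists in the keepalived path, which is where the deployment-time concern in point A comes from.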
> B) Containerization of PCMK plans - eventually we would like to run > the whole undercloud/overcloud on minimal OS in containers to keep > improving the operations on the nodes (updates/upgrades/etc). If > because PCMK we would be forever stuck on BM, it would be a bit of > pita. As Michele said, maybe we can re-visit this. So I briefly discussed this in our team, and while it could be re-explored, we need to be very careful about the tradeoffs. This would be another layer which would bring quite a bit of complexity (pcs commands would have to be run inside a container, speed tradeoffs, more limited possibilities when it comes to upgrading/updating, etc.) > C) Unification of undercloud/overcloud is important for us, so +1 to > whichever method is being used in both. But what I know, HA folks went > to keepalived since it is simpler so would be good to keep in sync > (and good we have their presence here actually) :) Right so to be honest, the choice of keepalived on the undercloud for VIP predates me and I was not directly involved, so I lack the exact background for that choice (and I could not quickly reconstruct it from git history). But I think it is/was a reasonable choice for what it needs doing, although I probably would have picked just configuring the extra VIPs on the interfaces and have one service less to care about. +1 in general on the unification, with the caveats that have been discussed so far. > D) Undercloud HA is a nice have which I think we want to get to one > day, but it is not in as big demand as for example edge deployments, > BM provisioning with pure OS, or multiple envs managed by single > undercloud. So even though undercloud HA is important, it won’t bring > operators as many benefits as the previously mentioned improvements. > Let’s keep it in mind when we are considering the amount of work > needed for it. +100 > E) One of the use-cases we want to take into account is expanind a > single-node deployment (all-in-one) to 3 node HA controller. I think > it is important when evaluating PCMK/keepalived Right, so to be able to implement this, there is no way around having pacemaker (at least today until we have galera and rabbit). It still does not mean we have to default to it, but if you want to scale beyond one node, then there is no other option atm. > HTH It did, thanks! Michele > — Jarda > > > On Jul 17, 2018, at 05:04, Emilien Macchi wrote: > > > > Thanks everyone for the feedback, I've made a quick PoC: > > https://review.openstack.org/#/q/topic:bp/undercloud-pacemaker-default > > > > And I'm currently doing local testing. I'll publish results when progress is made, but I've made it so we have the choice to enable pacemaker (disabled by default), where keepalived would remain the default for now. > > > > On Mon, Jul 16, 2018 at 2:07 PM Michele Baldessari wrote: > > On Mon, Jul 16, 2018 at 11:48:51AM -0400, Emilien Macchi wrote: > > > On Mon, Jul 16, 2018 at 11:42 AM Dan Prince wrote: > > > [...] > > > > > > > The biggest downside IMO is the fact that our Pacemaker integration is > > > > not containerized. Nor are there any plans to finish the > > > > containerization of it. Pacemaker has to currently run on baremetal > > > > and this makes the installation of it for small dev/test setups a lot > > > > less desirable. It can launch containers just fine but the pacemaker > > > > installation itself is what concerns me for the long term. 
> > > > > > > > Until we have plans for containizing it I suppose I would rather see > > > > us keep keepalived as an option for these smaller setups. We can > > > > certainly change our default Undercloud to use Pacemaker (if we choose > > > > to do so). But having keepalived around for "lightweight" (zero or low > > > > footprint) installs that work is really quite desirable. > > > > > > > > > > That's a good point, and I agree with your proposal. > > > Michele, what's the long term plan regarding containerized pacemaker? > > > > Well, we kind of started evaluating it (there was definitely not enough > > time around pike/queens as we were busy landing the bundles code), then > > due to discussions around k8s it kind of got off our radar. We can > > at least resume the discussions around it and see how much effort it > > would be. I'll bring it up with my team and get back to you. > > > > cheers, > > Michele > > -- > > Michele Baldessari > > C2A5 9DA3 9961 4FFB E01B D0BC DDD4 DCCB 7515 5C6D > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > -- > > Emilien Macchi > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Michele Baldessari C2A5 9DA3 9961 4FFB E01B D0BC DDD4 DCCB 7515 5C6D From openstack at nemebean.com Tue Jul 17 21:20:13 2018 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 17 Jul 2018 16:20:13 -0500 Subject: [openstack-dev] [tripleo] Stein blueprint - Plan to remove Keepalived support (replaced by Pacemaker) In-Reply-To: <20180717200025.GA23000@palahniuk.int.rhx> References: <20180716180719.GB4445@palahniuk.int.rhx> <0D8A4A7B-8F11-47C4-9D4F-7807C0B1591B@redhat.com> <20180717200025.GA23000@palahniuk.int.rhx> Message-ID: On 07/17/2018 03:00 PM, Michele Baldessari wrote: > Hi Jarda, > > thanks for these perspectives, this is very valuable! > > On Tue, Jul 17, 2018 at 06:01:21PM +0200, Jaromir Coufal wrote: >> Not rooting for any approach here, just want to add a bit of factors which might play a role when deciding which way to go: >> >> A) Performance matters, we should be improving simplicity and speed of >> deployments rather than making it heavier. If the deployment time and >> resource consumption is not significantly higher, I think it doesn’t >> cause an issue. But if there is a significant difference between PCMK >> and keepalived architecture, we would need to review that. > > +1 Should the pcmk take substantially more time then I agree, not worth > defaulting to it. Worth also exploring how we could tweak things > to make the setup of the cluster a bit faster (on a single node we can > lower certain wait operations) but full agreement on this point. 
> >> B) Containerization of PCMK plans - eventually we would like to run >> the whole undercloud/overcloud on minimal OS in containers to keep >> improving the operations on the nodes (updates/upgrades/etc). If >> because PCMK we would be forever stuck on BM, it would be a bit of >> pita. As Michele said, maybe we can re-visit this. > > So I briefly discussed this in our team, and while it could be > re-explored, we need to be very careful about the tradeoffs. > This would be another layer which would bring quite a bit of complexity > (pcs commands would have to be run inside a container, speed tradeoffs, > more limited possibilities when it comes to upgrading/updating, etc.) > >> C) Unification of undercloud/overcloud is important for us, so +1 to >> whichever method is being used in both. But what I know, HA folks went >> to keepalived since it is simpler so would be good to keep in sync >> (and good we have their presence here actually) :) > > Right so to be honest, the choice of keepalived on the undercloud for > VIP predates me and I was not directly involved, so I lack the exact > background for that choice (and I could not quickly reconstruct it from git > history). But I think it is/was a reasonable choice for what it needs > doing, although I probably would have picked just configuring the extra > VIPs on the interfaces and have one service less to care about. > +1 in general on the unification, with the caveats that have been > discussed so far. The only reason there even are vips on the undercloud is that we wanted ssl support, and we implemented that through the same haproxy puppet manifest as the overcloud, which required vips. Keepalived happened to be what it was using to provide vips at the time, so that's what we ended up with. There wasn't a conscious decision to use keepalived over anything else. > >> D) Undercloud HA is a nice have which I think we want to get to one >> day, but it is not in as big demand as for example edge deployments, >> BM provisioning with pure OS, or multiple envs managed by single >> undercloud. So even though undercloud HA is important, it won’t bring >> operators as many benefits as the previously mentioned improvements. >> Let’s keep it in mind when we are considering the amount of work >> needed for it. > > +100 I'm still of the opinion that undercloud HA shouldn't be a thing. It brings with it a whole host of problems and I have yet to hear a realistic use case that actually requires it. We were quite careful to make sure that the overcloud can continue to run indefinitely without the undercloud during downtime. *Maybe* sometime in the future when those other features are implemented it will make more sense, but I don't think it does right now. > >> E) One of the use-cases we want to take into account is expanind a >> single-node deployment (all-in-one) to 3 node HA controller. I think >> it is important when evaluating PCMK/keepalived > > Right, so to be able to implement this, there is no way around having > pacemaker (at least today until we have galera and rabbit). > It still does not mean we have to default to it, but if you want to > scale beyond one node, then there is no other option atm. > >> HTH > > It did, thanks! > > Michele >> — Jarda >> >>> On Jul 17, 2018, at 05:04, Emilien Macchi wrote: >>> >>> Thanks everyone for the feedback, I've made a quick PoC: >>> https://review.openstack.org/#/q/topic:bp/undercloud-pacemaker-default >>> >>> And I'm currently doing local testing. 
I'll publish results when progress is made, but I've made it so we have the choice to enable pacemaker (disabled by default), where keepalived would remain the default for now. >>> >>> On Mon, Jul 16, 2018 at 2:07 PM Michele Baldessari wrote: >>> On Mon, Jul 16, 2018 at 11:48:51AM -0400, Emilien Macchi wrote: >>>> On Mon, Jul 16, 2018 at 11:42 AM Dan Prince wrote: >>>> [...] >>>> >>>>> The biggest downside IMO is the fact that our Pacemaker integration is >>>>> not containerized. Nor are there any plans to finish the >>>>> containerization of it. Pacemaker has to currently run on baremetal >>>>> and this makes the installation of it for small dev/test setups a lot >>>>> less desirable. It can launch containers just fine but the pacemaker >>>>> installation itself is what concerns me for the long term. >>>>> >>>>> Until we have plans for containizing it I suppose I would rather see >>>>> us keep keepalived as an option for these smaller setups. We can >>>>> certainly change our default Undercloud to use Pacemaker (if we choose >>>>> to do so). But having keepalived around for "lightweight" (zero or low >>>>> footprint) installs that work is really quite desirable. >>>>> >>>> >>>> That's a good point, and I agree with your proposal. >>>> Michele, what's the long term plan regarding containerized pacemaker? >>> >>> Well, we kind of started evaluating it (there was definitely not enough >>> time around pike/queens as we were busy landing the bundles code), then >>> due to discussions around k8s it kind of got off our radar. We can >>> at least resume the discussions around it and see how much effort it >>> would be. I'll bring it up with my team and get back to you. >>> >>> cheers, >>> Michele >>> -- >>> Michele Baldessari >>> C2A5 9DA3 9961 4FFB E01B D0BC DDD4 DCCB 7515 5C6D >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >>> -- >>> Emilien Macchi >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From sean.mcginnis at gmx.com Tue Jul 17 21:53:59 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 17 Jul 2018 16:53:59 -0500 Subject: [openstack-dev] [cinder][nova] Proper behavior for os-force_detach In-Reply-To: References: Message-ID: <20180717215359.GA31698@sm-workstation> On Tue, Jul 17, 2018 at 04:06:29PM -0300, Erlon Cruz wrote: > Hi Cinder and Nova folks, > > Working on some tests for our drivers, I stumbled upon this tempest test > 'force_detach_volume' > that is calling Cinder API passing a 'None' connector. At the time this was > added several CIs > went down, and people started discussing whether this (accepting/sending a > None connector) > would be the proper behavior for what is expected to a driver to do[1]. 
So, > some of CIs started > just skipping that test[2][3][4] and others implemented fixes that made the > driver to disconnected > the volume from all hosts if a None connector was received[5][6][7]. Right, it was determined the correct behavior for this was to disconnect the volume from all hosts. The CIs that are skipping this test should stop doing so (once their drivers are fixed of course). > > While implementing this fix seems to be straightforward, I feel that just > removing the volume > from all hosts is not the correct thing to do mainly considering that we > can have multi-attach. > I don't think multiattach makes a difference here. Someone is forcibly detaching the volume and not specifying an individual connection. So based on that, Cinder should be removing any connections, whether that is to one or several hosts. > So, my questions are: What is the best way to fix this problem? Should > Cinder API continue to > accept detachments with None connectors? If, so, what would be the effects > on other Nova > attachments for the same volume? Is there any side effect if the volume is > not multi-attached? > > Additionally to this thread here, I should bring this topic to tomorrow's > Cinder's meeting, > so please join if you have something to share. > +1 - good plan. From adriant at catalyst.net.nz Wed Jul 18 00:19:17 2018 From: adriant at catalyst.net.nz (Adrian Turjak) Date: Wed, 18 Jul 2018 12:19:17 +1200 Subject: [openstack-dev] [all][tc][release][election][adjutant] Welcome Adjutant as an official project! In-Reply-To: <1531849678-sup-8719@lrrr.local> References: <1531849678-sup-8719@lrrr.local> Message-ID: Thanks! As the current project lead for Adjutant I welcome the news, and while I know it wasn't an easy process would like to thank everyone involved in the voting. All the feedback (good and bad) will be taken on board to make the service as suited for OpenStack as possible in the space we've decided it can fit. Now to onboarding, choosing a suitable service type, and preparing for a busy Stein cycle! - Adrian On 18/07/18 05:52, Doug Hellmann wrote: > The Adjutant team's application [1] to become an official project > has been approved. Welcome! > > As I said on the review, because it is past the deadline for Rocky > membership, Adjutant will not be considered part of the Rocky > release, but a future release can be part of Stein. > > The team should complete the onboarding process for new projects, > including holding PTL elections for Stein, setting up deliverable > files in the openstack/releases repository, and adding meeting > information to eavesdrop.openstack.org. > > I have left a comment on the patch setting up the Stein election > to ask that the Adjutant team be included. We can also add Adjutant > to the list of projects on docs.openstack.org for Stein, after > updating your publishing job(s). 
> > Doug > > [1] https://review.openstack.org/553643 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From zbitter at redhat.com Wed Jul 18 01:12:30 2018 From: zbitter at redhat.com (Zane Bitter) Date: Tue, 17 Jul 2018 21:12:30 -0400 Subject: [openstack-dev] [tc] [all] TC Report 18-26 In-Reply-To: <20521a8b-5f58-6ee0-a805-7dc9400b301b@openstack.org> References: <70afeb87-37c9-1595-ffa4-aadbd1a90228@gmail.com> <542aba82-5c0c-3549-e587-2deded610fe9@gmail.com> <5e835365-2d1a-d388-66b1-88cdf8c9a0fb@redhat.com> <20521a8b-5f58-6ee0-a805-7dc9400b301b@openstack.org> Message-ID: <052e02cf-750e-5755-2494-a2ef4ed73a3d@redhat.com> On 17/07/18 10:44, Thierry Carrez wrote: > Finally found the time to properly read this... For anybody else who found the wall of text challenging, I distilled the longest part into a blog post: https://www.zerobanana.com/archive/2018/07/17#openstack-layer-model-limitations > Zane Bitter wrote: >> [...] >> We chose to add features to Nova to compete with vCenter/oVirt, and >> not to add features the would have enabled OpenStack as a whole to >> compete with more than just the compute provisioning subset of >> EC2/Azure/GCP. > > Could you give an example of an EC2 action that would be beyond the > "compute provisioning subset" that you think we should have built into > Nova ? Automatic provision/rotation of application credentials. Reliable, user-facing event notifications. Collection of usage data suitable for autoscaling, billing, and whatever it is that Watcher does. >> Meanwhile, the other projects in OpenStack were working on building >> the other parts of an AWS/Azure/GCP competitor. And our vague >> one-sentence mission statement allowed us all to maintain the delusion >> that we were all working on the same thing and pulling in the same >> direction, when in truth we haven't been at all. > > Do you think that organizing (tying) our APIs along [micro]services, > rather than building a sanely-organized user API on top of a > sanely-organized set of microservices, played a role in that divide ? TBH, not really. If I were making a list of contributing factors I would probably put 'path dependence' at #1, #2 and #3. At the start of this discussion, Jay posted on IRC a list of things that he thought shouldn't have been in the Nova API[1]: - flavors - shelve/unshelve - instance groups - boot from volume where nova creates the volume during boot - create me a network on boot - num_instances > 1 when launching - evacuate - host-evacuate-live - resize where the user 'confirms' the operation - force/ignore host - security groups in the compute API - force delete server - restore soft deleted server - lock server - create backup Some of those are trivially composable in higher-level services (e.g. boot from volume where nova creates the volume, get me a network, security groups). I agree with Jay that in retrospect it would have been cleaner to delegate those to some higher level than the Nova API (or, equivalently, for some lower-level API to exist within what is now Nova). 
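To make "trivially composable" concrete, this is roughly what the
boot-from-volume case looks like when the composition happens
client-side instead of inside Nova. It's only a sketch with the python
clients: the exact kwargs are from memory, and 'sess', 'image_id' and
'flavor_id' are placeholders rather than anything real.

    # Sketch: compose "boot from volume" outside of Nova. Cinder creates
    # the bootable volume, then the server boots against it via a block
    # device mapping.
    from cinderclient import client as cinder_client
    from novaclient import client as nova_client

    cinder = cinder_client.Client('3', session=sess)  # sess: keystoneauth1 session
    nova = nova_client.Client('2.1', session=sess)

    vol = cinder.volumes.create(size=10, imageRef=image_id, name='root-vol')
    # ...wait for the volume to reach 'available' before booting...

    server = nova.servers.create(
        name='composed-bfv',
        image=None,            # root disk comes from the volume, not an image
        flavor=flavor_id,
        block_device_mapping_v2=[{
            'uuid': vol.id,
            'source_type': 'volume',
            'destination_type': 'volume',
            'boot_index': 0,
            'delete_on_termination': False,
        }])

Nothing in that flow needs Nova itself to know how to create volumes,
which is all "composable in a higher-level service" really means here.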
And maybe if we'd had a top-level API like that we'd have been more aware of the ways that the lower-level ones lacked legibility for orchestration tools (oaktree is effectively an example of a top-level API like this, I'm sure Monty can give us a list of complaints ;) But others on the list involve operations at a low level that don't appear to me to be composable out of simpler operations. (Maybe Jay has a shorter list of low-level APIs that could be combined to implement all of these, I don't know.) Once we decided to add those features, it was inevitable that they would reach right the way down through the stack to the lowest level. There's nothing _organisational_ stopping Nova from creating an internal API (it need not even be a ReST API) for the 'plumbing' parts, with a separate layer that does orchestration-y stuff. That they're not doing so suggests to me that they don't think this is the silver bullet for managing complexity. What would have been a silver bullet is saying 'no' to a bunch of those features, preferably starting with 'restore soft deleted server'(!!) and shelve/unshelve(?!). When AWS got feature requests like that they didn't say 'we'll have to add that in a higher-level API', they said 'if your application needs that then cloud is not for you'. We were never prepared to say that. [1] http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-06-26.log.html#t2018-06-26T15:30:33 >> We can decide that we want to be one, or the other, or both. But if we >> don't all decide together then a lot of us are going to continue >> wasting our time working at cross-purposes. > > If you are saying that we should choose between being vCenter or AWS, I > would definitely say the latter. Agreed. > But I'm still not sure I see this issue > in such a binary manner. I don't know that it's still a viable option to say 'AWS' now. Given our installed base of users and our commitment to not breaking them, our practical choices may well be between 'vCenter' or 'both'. It's painful because had we chosen 'AWS' at the beginning then we could have avoided the complexity hit of many of those features listed above, and spent our complexity budget on cloud features instead. Now we are locked in to supporting that legacy complexity forever, and it has reportedly maxed out our complexity budget to the point where people are reluctant to implement any cloud features, and unable to refactor to make them easier. Astute observers will note that this is a *textbook* case of the Innovator's Dilemma. > Imagine if (as suggested above) we refactored the compute node and give > it a user API, would that be one, the other, both ? In itself, it would have no effect. But if the refactor made the code easier to maintain, it might increase the willingness to move from one to both. > Or just a sane > addition to improve what OpenStack really is today: a set of open > infrastructure components providing different services with each their > API, with slight gaps and overlaps between them ? If nothing else, it would make it possible for somebody (probably Jay ;) to write a simpler compute API without any legacy cruft. Then at least when the Nova API's lunch gets eaten it might be by something in OpenStack rather than something like kubevirt. > Personally, I'm not very interested in discussing what OpenStack could > have been if we started building it today. 
I'm much more interested in > discussing what to add or change in order to make it usable for more use > cases while continuing to serve the needs of our existing users. It feels strange to argue against this, because it's the exact same philosophy of bottom-up incremental change that I've pushed for many, many years. However, I'm increasingly of the opinion that in some circumstances - particularly when some of your fundamental assumptions have changed, or you realise you had the wrong model of the problem - it's more helpful to step back and imagine how things would look if you were designing from scratch. And only _then_ look for incremental ways to get closer to that design. Skipping that step tends to lead to either (a) patchwork solutions that lack conceptual integrity, or (b) giving up and sticking with what you have. And often both, now that I think about it. > And I'm > not convinced that's an either/or choice... I said specifically that it's an either/or/and choice. So it's not a binary choice but it's very much a ternary choice IMHO. The middle ground, where each project - or even each individual contributor within a project - picks an option independently and proceeds on the implicit assumption that everyone else chose the same option (although - spoiler alert - they didn't)... that's not a good place to be. cheers, Zane. From iwienand at redhat.com Wed Jul 18 04:42:10 2018 From: iwienand at redhat.com (Ian Wienand) Date: Wed, 18 Jul 2018 14:42:10 +1000 Subject: [openstack-dev] OpenStack lagging behind 2 major python versions: we need a Python 3.7 gate In-Reply-To: <8e139644-44f3-fe80-ecda-39c5dab06a0a@debian.org> References: <8e139644-44f3-fe80-ecda-39c5dab06a0a@debian.org> Message-ID: <54887540-1350-708f-dcb4-0dec4bfac7b3@redhat.com> On 07/13/2018 06:38 AM, Thomas Goirand wrote: > Now, both Debian and Ubuntu have Python 3.7. Every package which I > upload in Sid need to support that. Yet, OpenStack's CI is still > lagging with Python 3.5. OpenStack's CI is rather broad -- I'm going to assume we're talking about whole-system devstack-ish based functional tests. Yes most testing is on Xenial and hence Python 3.5 We have Python 3.6 available via Bionic nodes. I think current talk is to look at mass-updates after the next release. Such updates, from history, are fairly disruptive. > I'm aware that there's been some attempts in the OpenStack infra to > have Debian Sid (which is probably the distribution getting the > updates the faster). We do not currently build Debian sid images, or mirror the unstable repos or do wheel builds for Debian. diskimage-builder also doesn't test it in CI. This is not to say it can't be done. > If it cannot happen with Sid, then I don't know, choose another > platform, and do the Python 3-latest gating... Fedora has been consistently updated in OpenStack Infra for many years. IMO, and from my experience, six-monthly-ish updates are about as frequent as can be practically handled. The ideal is that a (say) Neutron dev gets a clear traceback from a standard Python error in their change and happily fixes it. The reality is probably more like this developer gets a tempest failure due to nova failing to boot a cirros image, stemming from a detached volume due to a qemu bug that manifests due to a libvirt update (I'm exaggerating, I know :). That sort of deeply tangled platform issue always exists; however it is armortised across the lifespan of the testing. 
So several weeks after we update all these key components, a random Neutron dev can be pretty sure that submitting their change is actually testing *their* change, and not really a defacto test of every other tangentially related component. A small, but real example; uwsgi wouldn't build with the gcc/glibc combo on Fedora 28 for two months after its release until uwsgi's 2.0.17.1. Fedora carried patches; but of course there were a lot previously unconsidered assumptions in devstack around deployment that made using the packaged versions difficult [1] (that stack still hasn't received any reviews). Nobody would claim diskimage-builder is the greatest thing ever, but it does produce our customised images in a wide variety of formats that runs in our very heterogeneous clouds. It's very reactive -- we don't know about package updates until they hit the distro, and sometimes that breaks assumptions. It's largely taken for granted in our CI, but it takes a constant sustained effort across the infra team to make sure we have somewhere to test. I hear myself sounding negative, but I think it's a fundamental problem. You can't be dragging in the latest of everything AND expect that you won't be constantly running off fixing weird things you never even knew existed. We can (and do) get to the bottom of these things, but if the platform changes again before you've even fixed the current issue, things start piling up. If the job is constantly broken it gets ignored -- if a non-voting job fails in the woods, does it make a sound? :) > When this happens, moving faster with Python 3 versions will be > mandatory for everyone, not only for fools like me who made the > switch early. This is a long way of saying that - IMO - the idea of putting out a Debian sid image daily (to a lesser degree Buster images) and throwing a project's devstack runs against it is unlikely to produce a good problems-avoided : development-resources ratio. However, prove me wrong :) If people would like to run their master against Fedora (note OpenStack's stable branch lifespan is generally longer than a given Fedora release is supported, so it is not much good there) you have later packages, but still a fairly practical 6-month-ish stability cadence. I'm happy to help (some projects do already). > With my rant done :) ... there's already discussion around multiple python versions, containers, etc in [2]. While I'm reserved about the idea of full platform functional tests, essentially having a wide-variety of up-to-date tox environments using some of the methods discussed there is, I think, a very practical way to be cow-catching some of the bigger issues with Python version updates. If we are to expend resources, my 2c worth is that pushing in that direction gives the best return on effort. -i [1] https://review.openstack.org/#/c/565923/ [2] http://lists.openstack.org/pipermail/openstack-dev/2018-July/132152.html From aschadin at sbcloud.ru Wed Jul 18 06:59:33 2018 From: aschadin at sbcloud.ru (=?utf-8?B?0KfQsNC00LjQvSDQkNC70LXQutGB0LDQvdC00YAg0KHQtdGA0LPQtdC10LI=?= =?utf-8?B?0LjRhw==?=) Date: Wed, 18 Jul 2018 06:59:33 +0000 Subject: [openstack-dev] [watcher] weekly meeting Message-ID: <29EA595B-99AC-407F-AB31-F988A54B6F8D@sbcloud.ru> Watcher team, It’s just a reminder we will have meeting today at 08:00 UTC on #openstack-meeting-alt channel. Best Regards, ____ Alex -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cjeanner at redhat.com Wed Jul 18 07:35:35 2018 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Wed, 18 Jul 2018 09:35:35 +0200 Subject: [openstack-dev] [tripleo] New "validation" subcommand for "openstack undercloud" In-Reply-To: References: Message-ID: <9b4991b7-1bd6-8a2e-1b35-0be8d3920821@redhat.com> Dear Stackers, Seeing the answers on and off-list, we're moving forward! So, here are the first steps: A blueprint has been created: https://blueprints.launchpad.net/tripleo/+spec/validation-framework I've started a draft of the spec, based on the feedbacks and discussions I could have: https://review.openstack.org/#/c/583475/ Please, feel free to comment the spec and add your thoughts - this is a really great opportunity to get a proper validation framework in tripleoclient directly. Thank you for your feedback and attention. Cheers, C. On 07/16/2018 05:27 PM, Cédric Jeanneret wrote: > Dear Stackers, > > In order to let operators properly validate their undercloud node, I > propose to create a new subcommand in the "openstack undercloud" "tree": > `openstack undercloud validate' > > This should only run the different validations we have in the > undercloud_preflight.py¹ > That way, an operator will be able to ensure all is valid before > starting "for real" any other command like "install" or "upgrade". > > Of course, this "validate" step is embedded in the "install" and > "upgrade" already, but having the capability to just validate without > any further action is something that can be interesting, for example: > > - ensure the current undercloud hardware/vm is sufficient for an update > - ensure the allocated VM for the undercloud is sufficient for a deploy > - and so on > > There are probably other possibilities, if we extend the "validation" > scope outside the "undercloud" (like, tripleo, allinone, even overcloud). > > What do you think? Any pros/cons/thoughts? > > Cheers, > > C. > > > > ¹ > http://git.openstack.org/cgit/openstack/python-tripleoclient/tree/tripleoclient/v1/undercloud_preflight.py > -- Cédric Jeanneret Software Engineer DFG:DF -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From gergely.csatari at nokia.com Wed Jul 18 08:01:48 2018 From: gergely.csatari at nokia.com (Csatari, Gergely (Nokia - HU/Budapest)) Date: Wed, 18 Jul 2018 08:01:48 +0000 Subject: [openstack-dev] [edge][glance]: Image handling in edge environment Message-ID: Hi, We had a great Forum session about image handling in edge environment in Vancouver [1]. As one outcome of the session I've created a wiki with the mentioned architecture options [1]. During the Edge Working Group [3] discussions we identified some questions (some of them are in the wiki, some of them are in mails [4]) and also I would like to get some feedback on the analyzis in the wiki from people who know Glance. I think the best would be to have some kind of meeting and I see two options to organize this: * Organize a dedicated meeting for this * Add this topic as an agenda point to the Glance weekly meeting Please share your preference and/or opinion. 
Thanks, Gerg0 [1]: https://etherpad.openstack.org/p/yvr-edge-cloud-images [2]: https://wiki.openstack.org/wiki/Image_handling_in_edge_environment [3]: https://wiki.openstack.org/wiki/Edge_Computing_Group [4]: http://lists.openstack.org/pipermail/edge-computing/2018-June/000239.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From geguileo at redhat.com Wed Jul 18 09:02:27 2018 From: geguileo at redhat.com (Gorka Eguileor) Date: Wed, 18 Jul 2018 11:02:27 +0200 Subject: [openstack-dev] [cinder][nova] Proper behavior for os-force_detach In-Reply-To: <20180717215359.GA31698@sm-workstation> References: <20180717215359.GA31698@sm-workstation> Message-ID: <20180718090227.thr2kb2336vptaos@localhost> On 17/07, Sean McGinnis wrote: > On Tue, Jul 17, 2018 at 04:06:29PM -0300, Erlon Cruz wrote: > > Hi Cinder and Nova folks, > > > > Working on some tests for our drivers, I stumbled upon this tempest test > > 'force_detach_volume' > > that is calling Cinder API passing a 'None' connector. At the time this was > > added several CIs > > went down, and people started discussing whether this (accepting/sending a > > None connector) > > would be the proper behavior for what is expected to a driver to do[1]. So, > > some of CIs started > > just skipping that test[2][3][4] and others implemented fixes that made the > > driver to disconnected > > the volume from all hosts if a None connector was received[5][6][7]. > > Right, it was determined the correct behavior for this was to disconnect the > volume from all hosts. The CIs that are skipping this test should stop doing so > (once their drivers are fixed of course). > > > > > While implementing this fix seems to be straightforward, I feel that just > > removing the volume > > from all hosts is not the correct thing to do mainly considering that we > > can have multi-attach. > > > > I don't think multiattach makes a difference here. Someone is forcibly > detaching the volume and not specifying an individual connection. So based on > that, Cinder should be removing any connections, whether that is to one or > several hosts. > Hi, I agree with Sean, drivers should remove all connections for the volume. Even without multiattach there are cases where you'll have multiple connections for the same volume, like in a Live Migration. It's also very useful when Nova and Cinder get out of sync and your volume has leftover connections. In this case if you try to delete the volume you get a "volume in use" error from some drivers. Cheers, Gorka. > > So, my questions are: What is the best way to fix this problem? Should > > Cinder API continue to > > accept detachments with None connectors? If, so, what would be the effects > > on other Nova > > attachments for the same volume? Is there any side effect if the volume is > > not multi-attached? > > > > Additionally to this thread here, I should bring this topic to tomorrow's > > Cinder's meeting, > > so please join if you have something to share. > > > > +1 - good plan. 
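To make the expected driver behaviour a bit more concrete, here is a
minimal sketch of "a None connector means detach everywhere" inside a
driver's terminate_connection. The method signature is the standard
driver one, but the helpers are hypothetical stand-ins for whatever the
backend actually exposes, not any real driver's API.

    # Sketch only: method on a volume driver class.
    # _connections_for_volume/_find_connection/_remove_connection are
    # hypothetical helpers, named here purely for illustration.
    def terminate_connection(self, volume, connector, **kwargs):
        if connector is None:
            # Force-detach: Nova may no longer know (or have) the host,
            # so drop every export/initiator mapping for this volume,
            # including leftovers from live migration.
            for conn in self._connections_for_volume(volume):
                self._remove_connection(volume, conn)
            return
        # Normal detach: only remove the connection for this connector.
        conn = self._find_connection(volume, connector)
        if conn:
            self._remove_connection(volume, conn)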
> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From gmann at ghanshyammann.com Wed Jul 18 12:25:15 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 18 Jul 2018 21:25:15 +0900 Subject: [openstack-dev] [nova]API update week 12-18 Message-ID: <164ad5a0572.e100a65144161.8675617833238969078@ghanshyammann.com> Hi All, Please find the Nova API highlights of this week. Weekly Office Hour: =============== What we discussed this week: - Discussion on priority BP and remaining reviews on those. - picked up 3 in-progress bug's patches and reviewed. Planned Features : ============== Below are the API related features for Rocky cycle. Nova API Sub team will start reviewing those to give their regular feedback. If anythings missing there feel free to add those in etherpad- https://etherpad.openstack.org/p/rocky-nova-priorities-tracking 1. Servers Ips non-unique network names : - https://blueprints.launchpad.net/nova/+spec/servers-ips-non-unique-network-names - Spec Merged - https://review.openstack.org/#/q/topic:bp/servers-ips-non-unique-network-names+(status:open+OR+status:merged) - Weekly Progress: I sent mail to author but no response yet. I will push the code update during next week early. 2. Abort live migration in queued state: - https://blueprints.launchpad.net/nova/+spec/abort-live-migration-in-queued-status - https://review.openstack.org/#/q/topic:bp/abort-live-migration-in-queued-status+(status:open+OR+status:merged) - Weekly Progress: API patch is in gate to merge. nova client patch is remaining to mark this complete (Kevin mentioned he is working on that). 3. Complex anti-affinity policies: - https://blueprints.launchpad.net/nova/+spec/complex-anti-affinity-policies - https://review.openstack.org/#/q/topic:bp/complex-anti-affinity-policies+(status:open+OR+status:merged) - Weekly Progress: API patch is merged. nova client and 1 follow up patch is remaining. 4. Volume multiattach enhancements: - https://blueprints.launchpad.net/nova/+spec/volume-multiattach-enhancements - https://review.openstack.org/#/q/topic:bp/volume-multiattach-enhancements+(status:open+OR+status:merged) - Weekly Progress: No progress. 5. API Extensions merge work - https://blueprints.launchpad.net/nova/+spec/api-extensions-merge-rocky - https://review.openstack.org/#/q/project:openstack/nova+branch:master+topic:bp/api-extensions-merge-rocky - Weekly Progress: I pushed patches for part-2 (server_create merge). I will work on pushing last part-3 max by early next week. 6. Handling a down cell - https://blueprints.launchpad.net/nova/+spec/handling-down-cell - https://review.openstack.org/#/q/topic:bp/handling-down-cell+(status:open+OR+status:merged) - Weekly Progress: Code is up and matt has reviewed few patches. API subteam will target this BP as other BP work are almost merged. Bugs: ==== Did review on in-progress bugs's patches. This week Bug Progress: Critical: 0->0 High importance: 3->3 By Status: New: 0->0 Confirmed/Triage: 31-> 29 In-progress: 36->36 Incomplete: 4->4 ===== Total: 70->69 NOTE- There might be some bug which are not tagged as 'api' or 'api-ref', those are not in above list. Tag such bugs so that we can keep our eyes. 
Ref: https://etherpad.openstack.org/p/nova-api-weekly-bug-report -gmann From tobias.rydberg at citynetwork.eu Wed Jul 18 13:40:29 2018 From: tobias.rydberg at citynetwork.eu (Tobias Rydberg) Date: Wed, 18 Jul 2018 15:40:29 +0200 Subject: [openstack-dev] [publiccloud-wg] Meeting tomorrow for Public Cloud WG Message-ID: Hi folks, Time for a new meeting for the Public Cloud WG. Agenda draft can be found at https://etherpad.openstack.org/p/publiccloud-wg, feel free to add items to that list. See you all tomorrow at IRC 1400 UTC in #openstack-publiccloud Cheers, Tobias -- Tobias Rydberg Senior Developer Twitter & IRC: tobberydberg www.citynetwork.eu | www.citycloud.com INNOVATION THROUGH OPEN IT INFRASTRUCTURE ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED From dprince at redhat.com Wed Jul 18 15:07:04 2018 From: dprince at redhat.com (Dan Prince) Date: Wed, 18 Jul 2018 11:07:04 -0400 Subject: [openstack-dev] [tripleo] Stein blueprint - Plan to remove Keepalived support (replaced by Pacemaker) In-Reply-To: <20180717200025.GA23000@palahniuk.int.rhx> References: <20180716180719.GB4445@palahniuk.int.rhx> <0D8A4A7B-8F11-47C4-9D4F-7807C0B1591B@redhat.com> <20180717200025.GA23000@palahniuk.int.rhx> Message-ID: <44c5568628bad2724e636a3e214ac46009f02bd1.camel@redhat.com> On Tue, 2018-07-17 at 22:00 +0200, Michele Baldessari wrote: > Hi Jarda, > > thanks for these perspectives, this is very valuable! > > On Tue, Jul 17, 2018 at 06:01:21PM +0200, Jaromir Coufal wrote: > > Not rooting for any approach here, just want to add a bit of > > factors which might play a role when deciding which way to go: > > > > A) Performance matters, we should be improving simplicity and speed > > of > > deployments rather than making it heavier. If the deployment time > > and > > resource consumption is not significantly higher, I think it > > doesn’t > > cause an issue. But if there is a significant difference between > > PCMK > > and keepalived architecture, we would need to review that. > > +1 Should the pcmk take substantially more time then I agree, not > worth > defaulting to it. Worth also exploring how we could tweak things > to make the setup of the cluster a bit faster (on a single node we > can > lower certain wait operations) but full agreement on this point. > > > B) Containerization of PCMK plans - eventually we would like to run > > the whole undercloud/overcloud on minimal OS in containers to keep > > improving the operations on the nodes (updates/upgrades/etc). If > > because PCMK we would be forever stuck on BM, it would be a bit of > > pita. As Michele said, maybe we can re-visit this. > > So I briefly discussed this in our team, and while it could be > re-explored, we need to be very careful about the tradeoffs. > This would be another layer which would bring quite a bit of > complexity > (pcs commands would have to be run inside a container, speed > tradeoffs, > more limited possibilities when it comes to upgrading/updating, etc.) > > > C) Unification of undercloud/overcloud is important for us, so +1 > > to > > whichever method is being used in both. But what I know, HA folks > > went > > to keepalived since it is simpler so would be good to keep in sync > > (and good we have their presence here actually) :) > > Right so to be honest, the choice of keepalived on the undercloud for > VIP predates me and I was not directly involved, so I lack the exact > background for that choice (and I could not quickly reconstruct it > from git > history). 
But I think it is/was a reasonable choice for what it needs > doing, although I probably would have picked just configuring the > extra > VIPs on the interfaces and have one service less to care about. > +1 in general on the unification, with the caveats that have been > discussed so far. I think it was more of that we wanted to use HAProxy for SSL termination and keepalived is a simple enough way to set this up. Instack-Undercloud has used HAProxy/keepalived for years in this manner. I think this came up recently because downstream we did not have a keepalived container. So it got a bit of spotlight on it as to why we were using it. We do have a keepalived RPM and its worked as it has for years already so as far as single node/undercloud setups go I think it would continue to work fine. Kolla has had and supports the keepalived container for awhile now as well. --- Comments on this thread seem to cover 2 main themes to me. Simplification and the desire to use the same architecture as the Overcloud (Pacemaker). And there is some competition between them. For simplification: If we can eliminate keepalived and still use HAProxy (thus keeping the SSL termination features working) then I think that would be worth trying. Specifically can we eliminate Keepalived without swapping in Pacemaker? Michele: if you have ideas here lets try them! With regards to Pacemaker I think we need to make an exception. It seems way too heavy for single node setups and increases the complexity there for very little benefit. To me the shared architecture for TripleO is the tools we use to setup services. By using t-h-t to drive our setup of the Undercloud and All-In-One installers we are already gaining a lot of benefit here. Pacemaker is weird as it is kind of augments the architecture a bit (HA architecture). But Pacemaker is also a service that gets configured by TripleO. So it kind of falls into both categories. Pacemaker gives us features we need in the Overcloud at the cost of some extra complexity. And in addition to all this we are still running the Pacemaker processes themselves on baremetal. All this just to say we are running the same "architecture" on both the Undercloud and Overcloud? I'm not a fan. Dan > > > D) Undercloud HA is a nice have which I think we want to get to one > > day, but it is not in as big demand as for example edge > > deployments, > > BM provisioning with pure OS, or multiple envs managed by single > > undercloud. So even though undercloud HA is important, it won’t > > bring > > operators as many benefits as the previously mentioned > > improvements. > > Let’s keep it in mind when we are considering the amount of work > > needed for it. > > +100 > > > E) One of the use-cases we want to take into account is expanind a > > single-node deployment (all-in-one) to 3 node HA controller. I > > think > > it is important when evaluating PCMK/keepalived > > Right, so to be able to implement this, there is no way around having > pacemaker (at least today until we have galera and rabbit). > It still does not mean we have to default to it, but if you want to > scale beyond one node, then there is no other option atm. > > > HTH > > It did, thanks! > > Michele > > — Jarda > > > > > On Jul 17, 2018, at 05:04, Emilien Macchi > > > wrote: > > > > > > Thanks everyone for the feedback, I've made a quick PoC: > > > https://review.openstack.org/#/q/topic:bp/undercloud-pacemaker-de > > > fault > > > > > > And I'm currently doing local testing. 
I'll publish results when > > > progress is made, but I've made it so we have the choice to > > > enable pacemaker (disabled by default), where keepalived would > > > remain the default for now. > > > > > > On Mon, Jul 16, 2018 at 2:07 PM Michele Baldessari > > n.org> wrote: > > > On Mon, Jul 16, 2018 at 11:48:51AM -0400, Emilien Macchi wrote: > > > > On Mon, Jul 16, 2018 at 11:42 AM Dan Prince > > > > wrote: > > > > [...] > > > > > > > > > The biggest downside IMO is the fact that our Pacemaker > > > > > integration is > > > > > not containerized. Nor are there any plans to finish the > > > > > containerization of it. Pacemaker has to currently run on > > > > > baremetal > > > > > and this makes the installation of it for small dev/test > > > > > setups a lot > > > > > less desirable. It can launch containers just fine but the > > > > > pacemaker > > > > > installation itself is what concerns me for the long term. > > > > > > > > > > Until we have plans for containizing it I suppose I would > > > > > rather see > > > > > us keep keepalived as an option for these smaller setups. We > > > > > can > > > > > certainly change our default Undercloud to use Pacemaker (if > > > > > we choose > > > > > to do so). But having keepalived around for "lightweight" > > > > > (zero or low > > > > > footprint) installs that work is really quite desirable. > > > > > > > > > > > > > That's a good point, and I agree with your proposal. > > > > Michele, what's the long term plan regarding containerized > > > > pacemaker? > > > > > > Well, we kind of started evaluating it (there was definitely not > > > enough > > > time around pike/queens as we were busy landing the bundles > > > code), then > > > due to discussions around k8s it kind of got off our radar. We > > > can > > > at least resume the discussions around it and see how much effort > > > it > > > would be. I'll bring it up with my team and get back to you. 
> > > > > > cheers, > > > Michele > > > -- > > > Michele Baldessari > > > C2A5 9DA3 9961 4FFB E01B D0BC DDD4 DCCB 7515 5C6D > > > > > > _________________________________________________________________ > > > _________ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:un > > > subscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > -- > > > Emilien Macchi > > > _________________________________________________________________ > > > _________ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:un > > > subscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > ___________________________________________________________________ > > _______ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsu > > bscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > From waboring at hemna.com Wed Jul 18 15:56:07 2018 From: waboring at hemna.com (Walter Boring) Date: Wed, 18 Jul 2018 11:56:07 -0400 Subject: [openstack-dev] [cinder][nova] Proper behavior for os-force_detach In-Reply-To: <20180718090227.thr2kb2336vptaos@localhost> References: <20180717215359.GA31698@sm-workstation> <20180718090227.thr2kb2336vptaos@localhost> Message-ID: The whole purpose of this test is to simulate the case where Nova doesn't know where the vm is anymore, or may simply not exist, but we need to clean up the cinder side of things. That being said, with the new attach API, the connector is being saved in the cinder database for each volume attachment. Walt On Wed, Jul 18, 2018 at 5:02 AM, Gorka Eguileor wrote: > On 17/07, Sean McGinnis wrote: > > On Tue, Jul 17, 2018 at 04:06:29PM -0300, Erlon Cruz wrote: > > > Hi Cinder and Nova folks, > > > > > > Working on some tests for our drivers, I stumbled upon this tempest > test > > > 'force_detach_volume' > > > that is calling Cinder API passing a 'None' connector. At the time > this was > > > added several CIs > > > went down, and people started discussing whether this > (accepting/sending a > > > None connector) > > > would be the proper behavior for what is expected to a driver to > do[1]. So, > > > some of CIs started > > > just skipping that test[2][3][4] and others implemented fixes that > made the > > > driver to disconnected > > > the volume from all hosts if a None connector was received[5][6][7]. > > > > Right, it was determined the correct behavior for this was to disconnect > the > > volume from all hosts. The CIs that are skipping this test should stop > doing so > > (once their drivers are fixed of course). > > > > > > > > While implementing this fix seems to be straightforward, I feel that > just > > > removing the volume > > > from all hosts is not the correct thing to do mainly considering that > we > > > can have multi-attach. > > > > > > > I don't think multiattach makes a difference here. Someone is forcibly > > detaching the volume and not specifying an individual connection. So > based on > > that, Cinder should be removing any connections, whether that is to one > or > > several hosts. > > > > Hi, > > I agree with Sean, drivers should remove all connections for the volume. 
> > Even without multiattach there are cases where you'll have multiple > connections for the same volume, like in a Live Migration. > > It's also very useful when Nova and Cinder get out of sync and your > volume has leftover connections. In this case if you try to delete the > volume you get a "volume in use" error from some drivers. > > Cheers, > Gorka. > > > > > So, my questions are: What is the best way to fix this problem? Should > > > Cinder API continue to > > > accept detachments with None connectors? If, so, what would be the > effects > > > on other Nova > > > attachments for the same volume? Is there any side effect if the > volume is > > > not multi-attached? > > > > > > Additionally to this thread here, I should bring this topic to > tomorrow's > > > Cinder's meeting, > > > so please join if you have something to share. > > > > > > > +1 - good plan. > > > > > > ____________________________________________________________ > ______________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Wed Jul 18 16:14:26 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 18 Jul 2018 11:14:26 -0500 Subject: [openstack-dev] [nova] Bug 1781710 killing the check queue Message-ID: <01eae5c5-1cb3-ea44-1b2f-78e5049805e0@gmail.com> As can be seen from logstash [1] this bug is hurting us pretty bad in the check queue. I thought I originally had this fixed with [2] but that turned out to only be part of the issue. I think I've identified the problem but I have failed to write a recreate regression test [3] because (I think) it's due to random ordering of which request spec we select to send to the scheduler during a multi-create request (and I tried making that predictable by sorting the instances by uuid in both conductor and the scheduler but that didn't make a difference in my test). I started with one fix yesterday [4] but that would regress an earlier fix for resizing servers to the same host which are in an anti-affinity group. If we went that route, it will involve changes to how we handle RequestSpec.num_instances (either not persist it, or reset it during move operations). After talking with Sean Mooney, we have another fix which is self-contained to the scheduler [5] so we wouldn't need to make any changes to the RequestSpec handling in conductor. It's admittedly a bit hairy, so I'm asking for some eyes on it since either way we go, we should get going soon before we hit the FF and RC1 rush which *always* kills the gate. 
[1] http://status.openstack.org/elastic-recheck/index.html#1781710 [2] https://review.openstack.org/#/c/582976/ [3] https://review.openstack.org/#/c/583339 [4] https://review.openstack.org/#/c/583351 [5] https://review.openstack.org/#/c/583347 -- Thanks, Matt From jaypipes at gmail.com Wed Jul 18 17:34:27 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 18 Jul 2018 13:34:27 -0400 Subject: [openstack-dev] OpenStack lagging behind 2 major python versions: we need a Python 3.7 gate In-Reply-To: <54887540-1350-708f-dcb4-0dec4bfac7b3@redhat.com> References: <8e139644-44f3-fe80-ecda-39c5dab06a0a@debian.org> <54887540-1350-708f-dcb4-0dec4bfac7b3@redhat.com> Message-ID: On 07/18/2018 12:42 AM, Ian Wienand wrote: > The ideal is that a (say) Neutron dev gets a clear traceback from a > standard Python error in their change and happily fixes it. The > reality is probably more like this developer gets a tempest > failure due to nova failing to boot a cirros image, stemming from a > detached volume due to a qemu bug that manifests due to a libvirt > update (I'm exaggerating, I know :). Not really exaggerating. :) -jay From chris.friesen at windriver.com Wed Jul 18 18:05:13 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Wed, 18 Jul 2018 12:05:13 -0600 Subject: [openstack-dev] [nova] Bug 1781710 killing the check queue In-Reply-To: <01eae5c5-1cb3-ea44-1b2f-78e5049805e0@gmail.com> References: <01eae5c5-1cb3-ea44-1b2f-78e5049805e0@gmail.com> Message-ID: <5B4F8159.9010301@windriver.com> On 07/18/2018 10:14 AM, Matt Riedemann wrote: > As can be seen from logstash [1] this bug is hurting us pretty bad in the check > queue. > > I thought I originally had this fixed with [2] but that turned out to only be > part of the issue. > > I think I've identified the problem but I have failed to write a recreate > regression test [3] because (I think) it's due to random ordering of which > request spec we select to send to the scheduler during a multi-create request > (and I tried making that predictable by sorting the instances by uuid in both > conductor and the scheduler but that didn't make a difference in my test). Can we get rid of multi-create? It keeps causing complications, and it already has weird behaviour if you ask for min_count=X and max_count=Y and only X instances can be scheduled. (Currently it fails with NoValidHost, but it should arguably start up X instances.) > After talking with Sean Mooney, we have another fix which is self-contained to > the scheduler [5] so we wouldn't need to make any changes to the RequestSpec > handling in conductor. It's admittedly a bit hairy, so I'm asking for some eyes > on it since either way we go, we should get going soon before we hit the FF and > RC1 rush which *always* kills the gate. One of your options mentioned using RequestSpec.num_instances to decide if it's in a multi-create. Is there any reason to persist RequestSpec.num_instances? It seems like it's only applicable to the initial request, since after that each instance is managed individually. 
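For anyone who hasn't used it, multi-create is just the normal server
create call with the two count fields added to the body, along these
lines (field names from memory, the IDs are obviously placeholders):

    # Approximate request body for a batch boot via POST /servers.
    # With these values, the behaviour described above is that the whole
    # request fails with NoValidHost when all 5 can't be placed, instead
    # of falling back to booting at least the 2 given as the minimum.
    body = {
        "server": {
            "name": "batch-boot",
            "imageRef": "IMAGE_UUID",    # placeholder
            "flavorRef": "FLAVOR_ID",    # placeholder
            "min_count": 2,
            "max_count": 5,
        }
    }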
Chris From cboylan at sapwetik.org Wed Jul 18 18:11:17 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Wed, 18 Jul 2018 11:11:17 -0700 Subject: [openstack-dev] OpenStack lagging behind 2 major python versions: we need a Python 3.7 gate In-Reply-To: <8e139644-44f3-fe80-ecda-39c5dab06a0a@debian.org> References: <8e139644-44f3-fe80-ecda-39c5dab06a0a@debian.org> Message-ID: <1531937477.2427185.1445179376.2B973B6E@webmail.messagingengine.com> On Thu, Jul 12, 2018, at 1:38 PM, Thomas Goirand wrote: > Hi everyone! > > It's yet another of these emails where I'm going to complain out of > frustration because of OpenStack having bugs when running with the > newest stuff... Sorry in advance ! :) > > tl;dr: It's urgent, we need Python 3.7 uwsgi + SSL gate jobs. > > Longer version: > > When Python 3.6 reached Debian, i already forwarded a few patches. It > went quite ok, but still... When switching services to Python 3 for > Newton, I discover that many services still had issues with uwsgi / > mod_wsgi, and I spent a large amount of time trying to figure out ways > to fix the situation. Some patches are still not yet merged, even though > it was a community goal to have this support for Newton: > > Neutron: > https://review.openstack.org/#/c/555608/ > https://review.openstack.org/#/c/580049/ > > Neutron FWaaS: > https://review.openstack.org/#/c/580327/ > https://review.openstack.org/#/c/579433/ > > Horizon tempest plugin: > https://review.openstack.org/#/c/575714/ > > Oslotet (clearly, the -1 is for someone considering only Devstack / > venv, not understanding packaging environment): > https://review.openstack.org/#/c/571962/ > > Designate: > As much as I know, it still doesn't support uwsgi / mod_wsgi (please let > me know if this changed recently). > > There may be more, I didn't have much time investigating some projects > which are less important to me. > > Now, both Debian and Ubuntu have Python 3.7. Every package which I > upload in Sid need to support that. Yet, OpenStack's CI is still lagging > with Python 3.5. And there's lots of things currently broken. We've > fixed most "async" stuff, though we are failing to rebuild > oslo.messaging (from Queens) with Python 3.7: unit tests are just > hanging doing nothing. > > I'm very happy to do small contributions to each and every component > here and there whenever it's possible, but this time, it's becoming a > little bit frustrating. I sometimes even got replies like "hum ... > OpenStack only supports Python 3.5" a few times. That's not really > acceptable, unfortunately. > > So moving forward, what I think needs to happen is: > > - Get each and every project to actually gate using uwsgi for the API, > using both Python 3 and SSL (any other test environment is *NOT* a real > production environment). > > - The gating has to happen with whatever is the latest Python 3 version > available. Best would even be if we could have that *BEFORE* it reaches > distributions like Debian and Ubuntu. I'm aware that there's been some > attempts in the OpenStack infra to have Debian Sid (which is probably > the distribution getting the updates the faster). This effort needs to > be restarted, and some (non-voting ?) gate jobs needs to be setup using > whatever the latest thing is. If it cannot happen with Sid, then I don't > know, choose another platform, and do the Python 3-latest gating... When you asked about this last month I suggested Tumbleweed as an option. You get rolling release packages that are almost always up to date. 
I'd still suggest that now as a place to start. http://lists.openstack.org/pipermail/openstack-dev/2018-June/131302.html > > The current situation with the gate still doing Python 3.5 only jobs is > just not sustainable anymore. Moving forward, Python 2.7 will die. When > this happens, moving faster with Python 3 versions will be mandatory for > everyone, not only for fools like me who made the switch early. > > :) > > Cheers, > > Thomas Goirand (zigo) > > P.S: A big thanks to everyone who where helpful for making the switch to > Python 3 in Debian, especially Annp and the rest of the Neutron team. From melwittt at gmail.com Wed Jul 18 18:13:58 2018 From: melwittt at gmail.com (melanie witt) Date: Wed, 18 Jul 2018 11:13:58 -0700 Subject: [openstack-dev] [nova] Bug 1781710 killing the check queue In-Reply-To: <5B4F8159.9010301@windriver.com> References: <01eae5c5-1cb3-ea44-1b2f-78e5049805e0@gmail.com> <5B4F8159.9010301@windriver.com> Message-ID: <192ee905-7fb4-80c1-9dfc-65a120de0526@gmail.com> On Wed, 18 Jul 2018 12:05:13 -0600, Chris Friesen wrote: > On 07/18/2018 10:14 AM, Matt Riedemann wrote: >> As can be seen from logstash [1] this bug is hurting us pretty bad in the check >> queue. >> >> I thought I originally had this fixed with [2] but that turned out to only be >> part of the issue. >> >> I think I've identified the problem but I have failed to write a recreate >> regression test [3] because (I think) it's due to random ordering of which >> request spec we select to send to the scheduler during a multi-create request >> (and I tried making that predictable by sorting the instances by uuid in both >> conductor and the scheduler but that didn't make a difference in my test). > > Can we get rid of multi-create? It keeps causing complications, and it already > has weird behaviour if you ask for min_count=X and max_count=Y and only X > instances can be scheduled. (Currently it fails with NoValidHost, but it should > arguably start up X instances.) We've discussed that before but I think users do use it and appreciate the ability to boot instances in batches (one request). The behavior you describe could be changed with a microversion, though I'm not sure if that would mean we have to preserve old behavior with the previous microversion. >> After talking with Sean Mooney, we have another fix which is self-contained to >> the scheduler [5] so we wouldn't need to make any changes to the RequestSpec >> handling in conductor. It's admittedly a bit hairy, so I'm asking for some eyes >> on it since either way we go, we should get going soon before we hit the FF and >> RC1 rush which *always* kills the gate. > > One of your options mentioned using RequestSpec.num_instances to decide if it's > in a multi-create. Is there any reason to persist RequestSpec.num_instances? > It seems like it's only applicable to the initial request, since after that each > instance is managed individually. 
> > Chris > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From lbragstad at gmail.com Wed Jul 18 18:22:05 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Wed, 18 Jul 2018 13:22:05 -0500 Subject: [openstack-dev] [keystone] Keystone Team Update - Week of 9 July 2018 In-Reply-To: <1531506798.2175801.1440016944.59C6ACD5@webmail.messagingengine.com> References: <1531506798.2175801.1440016944.59C6ACD5@webmail.messagingengine.com> Message-ID: <4b068c14-7b9a-ab8f-779a-e4cb3eeeefb2@gmail.com> On 07/13/2018 01:33 PM, Colleen Murphy wrote: > # Keystone Team Update - Week of 9 July 2018 > > ## News > > ### New Core Reviewer > > We added a new core reviewer[1]: thanks to XiYuan for stepping up to take this responsibility and for all your hard work on keystone! > > [1] http://lists.openstack.org/pipermail/openstack-dev/2018-July/132123.html > > ### Release Status > > This week is our scheduled feature freeze week, but we did not have quite the tumult of activity we had at feature freeze last cycle. We're pushing the auth receipts work until after the token model refactor is finished[2], to avoid the receipts model having to carry extra technical debt. The fine-grained access control feature for application credentials is also going to need to be pushed to next cycle when more of us can dedicate time to helping with it it[3]. The base work for default roles was completed[4] but the auditing of the keystone API hasn't been completed yet and is partly dependent on the flask work, so it is going to continue on into next cycle[5]. The hierarchical limits work is pretty solid but we're (likely) going to let it slide into next week so that some of the interface details can be worked out[6]. > > [2] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-07-10.log.html#t2018-07-10T01:39:27 > [3] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-07-13.log.html#t2018-07-13T14:19:08 > [4] https://review.openstack.org/572243 > [5] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-07-13.log.html#t2018-07-13T14:02:03 > [6] https://review.openstack.org/557696 > > ### PTG Planning > > We're starting to prepare topics for the next PTG in Denver[7] so please add topics to the planning etherpad[8]. > > [7] http://lists.openstack.org/pipermail/openstack-dev/2018-July/132144.html > [8] https://etherpad.openstack.org/p/keystone-stein-ptg > > ## Recently Merged Changes > > Search query: https://bit.ly/2IACk3F > > We merged 20 changes this week, including several of the flask conversion patches. > > ## Changes that need Attention > > Search query: https://bit.ly/2wv7QLK > > There are 62 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots. The major efforts to focus on are the token model refactor[9], the flaskification work[10], and the hierarchical project limits work[11]. > > [9] https://review.openstack.org/#/q/is:open+topic:bug/1778945 > [10] https://review.openstack.org/#/q/is:open+topic:bug/1776504 > [11] https://review.openstack.org/#/q/is:open+topic:bp/strict-two-level-model > > ## Bugs > > This week we opened 3 new bugs and closed 4. 
> > Bugs opened (3) > Bug #1780532 (keystone:Undecided) opened by zheng yan https://bugs.launchpad.net/keystone/+bug/1780532 > Bug #1780896 (keystone:Undecided) opened by wangxiyuan https://bugs.launchpad.net/keystone/+bug/1780896 > Bug #1781536 (keystone:Undecided) opened by Pawan Gupta https://bugs.launchpad.net/keystone/+bug/1781536 > > Bugs closed (0) > > Bugs fixed (4) > Bug #1765193 (keystone:Medium) fixed by wangxiyuan https://bugs.launchpad.net/keystone/+bug/1765193 > Bug #1780159 (keystone:Medium) fixed by Sami Makki https://bugs.launchpad.net/keystone/+bug/1780159 > Bug #1780896 (keystone:Undecided) fixed by wangxiyuan https://bugs.launchpad.net/keystone/+bug/1780896 > Bug #1779172 (oslo.policy:Undecided) fixed by Lance Bragstad https://bugs.launchpad.net/oslo.policy/+bug/1779172 > > ## Milestone Outlook > > https://releases.openstack.org/rocky/schedule.html > > This week is our scheduled feature freeze. We are likely going to make an extension for the hierarchical project limits work, pending discussion on the mailing list. > > Next week is the non-client final release date[12], so work happening in keystoneauth, keystonemiddleware, and our oslo libraries needs to be finished and reviewed prior to next Thursday so a release can be requested in time. I've starred some reviews that I think we should land before Thursday if possible [0]. Eyes there would be appreciated. Morgan also reported a bug that he is working on fixing in keystonemiddleware that we should try an include as well [1]. I'll add the patch to the query as soon as a review is proposed to gerrit. [0] https://review.openstack.org/#/q/starredby:lbragstad%2540gmail.com+status:open [1] https://bugs.launchpad.net/keystonemiddleware/+bug/1782404 > > [12] https://review.openstack.org/572243 > > ## Help with this newsletter > > Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter > Dashboard generated using gerrit-dash-creator and https://gist.github.com/lbragstad/9b0477289177743d1ebfc276d1697b67 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From emilien at redhat.com Wed Jul 18 20:07:08 2018 From: emilien at redhat.com (Emilien Macchi) Date: Wed, 18 Jul 2018 16:07:08 -0400 Subject: [openstack-dev] [tripleo] Stein blueprint - Plan to remove Keepalived support (replaced by Pacemaker) In-Reply-To: <44c5568628bad2724e636a3e214ac46009f02bd1.camel@redhat.com> References: <20180716180719.GB4445@palahniuk.int.rhx> <0D8A4A7B-8F11-47C4-9D4F-7807C0B1591B@redhat.com> <20180717200025.GA23000@palahniuk.int.rhx> <44c5568628bad2724e636a3e214ac46009f02bd1.camel@redhat.com> Message-ID: Thanks everyone for this useful feedback (I guess it helps a lot to discuss before the PTG, so we don't even need to spend too much time on this topic). 1) Everyone agrees that undercloud HA isn't something we target now, therefore we won't switch to Pacemaker by default. 2) Pacemaker would still be a good option for multinode/HA standalone deployments, like we do for the overcloud. 
3) Investigate how we could replace keepalived by something which would handle the VIPs used by HAproxy. I've abandoned the patches that tested Pacemaker on the undercloud, and also the patch in tripleoclient for enable_pacemaker parameter, I think we don't need it for now. There is another way to enable Pacemaker for Standalone. I also closed the blueprint: https://blueprints.launchpad.net/tripleo/+spec/undercloud-pacemaker-default and created a new one: https://blueprints.launchpad.net/tripleo/+spec/replace-keepalived-undercloud Please take a look and let me know what you think. It fits well with the Simplicity theme for Stein, and it'll help to remove services that we don't need anymore. If any feedback on this summary, please go ahead and comment. Thanks, On Wed, Jul 18, 2018 at 11:07 AM Dan Prince wrote: > On Tue, 2018-07-17 at 22:00 +0200, Michele Baldessari wrote: > > Hi Jarda, > > > > thanks for these perspectives, this is very valuable! > > > > On Tue, Jul 17, 2018 at 06:01:21PM +0200, Jaromir Coufal wrote: > > > Not rooting for any approach here, just want to add a bit of > > > factors which might play a role when deciding which way to go: > > > > > > A) Performance matters, we should be improving simplicity and speed > > > of > > > deployments rather than making it heavier. If the deployment time > > > and > > > resource consumption is not significantly higher, I think it > > > doesn’t > > > cause an issue. But if there is a significant difference between > > > PCMK > > > and keepalived architecture, we would need to review that. > > > > +1 Should the pcmk take substantially more time then I agree, not > > worth > > defaulting to it. Worth also exploring how we could tweak things > > to make the setup of the cluster a bit faster (on a single node we > > can > > lower certain wait operations) but full agreement on this point. > > > > > B) Containerization of PCMK plans - eventually we would like to run > > > the whole undercloud/overcloud on minimal OS in containers to keep > > > improving the operations on the nodes (updates/upgrades/etc). If > > > because PCMK we would be forever stuck on BM, it would be a bit of > > > pita. As Michele said, maybe we can re-visit this. > > > > So I briefly discussed this in our team, and while it could be > > re-explored, we need to be very careful about the tradeoffs. > > This would be another layer which would bring quite a bit of > > complexity > > (pcs commands would have to be run inside a container, speed > > tradeoffs, > > more limited possibilities when it comes to upgrading/updating, etc.) > > > > > C) Unification of undercloud/overcloud is important for us, so +1 > > > to > > > whichever method is being used in both. But what I know, HA folks > > > went > > > to keepalived since it is simpler so would be good to keep in sync > > > (and good we have their presence here actually) :) > > > > Right so to be honest, the choice of keepalived on the undercloud for > > VIP predates me and I was not directly involved, so I lack the exact > > background for that choice (and I could not quickly reconstruct it > > from git > > history). But I think it is/was a reasonable choice for what it needs > > doing, although I probably would have picked just configuring the > > extra > > VIPs on the interfaces and have one service less to care about. > > +1 in general on the unification, with the caveats that have been > > discussed so far. 
> > I think it was more of that we wanted to use HAProxy for SSL > termination and keepalived is a simple enough way to set this up. > Instack-Undercloud has used HAProxy/keepalived for years in this > manner. > > I think this came up recently because downstream we did not have a > keepalived container. So it got a bit of spotlight on it as to why we > were using it. We do have a keepalived RPM and its worked as it has for > years already so as far as single node/undercloud setups go I think it > would continue to work fine. Kolla has had and supports the keepalived > container for awhile now as well. > > --- > > Comments on this thread seem to cover 2 main themes to me. > Simplification and the desire to use the same architecture as the > Overcloud (Pacemaker). And there is some competition between them. > > For simplification: If we can eliminate keepalived and still use > HAProxy (thus keeping the SSL termination features working) then I > think that would be worth trying. Specifically can we eliminate > Keepalived without swapping in Pacemaker? Michele: if you have ideas > here lets try them! > > With regards to Pacemaker I think we need to make an exception. It > seems way too heavy for single node setups and increases the complexity > there for very little benefit. To me the shared architecture for > TripleO is the tools we use to setup services. By using t-h-t to drive > our setup of the Undercloud and All-In-One installers we are already > gaining a lot of benefit here. Pacemaker is weird as it is kind of > augments the architecture a bit (HA architecture). But Pacemaker is > also a service that gets configured by TripleO. So it kind of falls > into both categories. Pacemaker gives us features we need in the > Overcloud at the cost of some extra complexity. And in addition to all > this we are still running the Pacemaker processes themselves on > baremetal. All this just to say we are running the same "architecture" > on both the Undercloud and Overcloud? I'm not a fan. > > Dan > > > > > > > > D) Undercloud HA is a nice have which I think we want to get to one > > > day, but it is not in as big demand as for example edge > > > deployments, > > > BM provisioning with pure OS, or multiple envs managed by single > > > undercloud. So even though undercloud HA is important, it won’t > > > bring > > > operators as many benefits as the previously mentioned > > > improvements. > > > Let’s keep it in mind when we are considering the amount of work > > > needed for it. > > > > +100 > > > > > E) One of the use-cases we want to take into account is expanind a > > > single-node deployment (all-in-one) to 3 node HA controller. I > > > think > > > it is important when evaluating PCMK/keepalived > > > > Right, so to be able to implement this, there is no way around having > > pacemaker (at least today until we have galera and rabbit). > > It still does not mean we have to default to it, but if you want to > > scale beyond one node, then there is no other option atm. > > > > > HTH > > > > It did, thanks! > > > > Michele > > > — Jarda > > > > > > > On Jul 17, 2018, at 05:04, Emilien Macchi > > > > wrote: > > > > > > > > Thanks everyone for the feedback, I've made a quick PoC: > > > > https://review.openstack.org/#/q/topic:bp/undercloud-pacemaker-de > > > > fault > > > > > > > > And I'm currently doing local testing. 
I'll publish results when > > > > progress is made, but I've made it so we have the choice to > > > > enable pacemaker (disabled by default), where keepalived would > > > > remain the default for now. > > > > > > > > On Mon, Jul 16, 2018 at 2:07 PM Michele Baldessari > > > n.org> wrote: > > > > On Mon, Jul 16, 2018 at 11:48:51AM -0400, Emilien Macchi wrote: > > > > > On Mon, Jul 16, 2018 at 11:42 AM Dan Prince > > > > > wrote: > > > > > [...] > > > > > > > > > > > The biggest downside IMO is the fact that our Pacemaker > > > > > > integration is > > > > > > not containerized. Nor are there any plans to finish the > > > > > > containerization of it. Pacemaker has to currently run on > > > > > > baremetal > > > > > > and this makes the installation of it for small dev/test > > > > > > setups a lot > > > > > > less desirable. It can launch containers just fine but the > > > > > > pacemaker > > > > > > installation itself is what concerns me for the long term. > > > > > > > > > > > > Until we have plans for containizing it I suppose I would > > > > > > rather see > > > > > > us keep keepalived as an option for these smaller setups. We > > > > > > can > > > > > > certainly change our default Undercloud to use Pacemaker (if > > > > > > we choose > > > > > > to do so). But having keepalived around for "lightweight" > > > > > > (zero or low > > > > > > footprint) installs that work is really quite desirable. > > > > > > > > > > > > > > > > That's a good point, and I agree with your proposal. > > > > > Michele, what's the long term plan regarding containerized > > > > > pacemaker? > > > > > > > > Well, we kind of started evaluating it (there was definitely not > > > > enough > > > > time around pike/queens as we were busy landing the bundles > > > > code), then > > > > due to discussions around k8s it kind of got off our radar. We > > > > can > > > > at least resume the discussions around it and see how much effort > > > > it > > > > would be. I'll bring it up with my team and get back to you. 
> > > > > > > > cheers, > > > > Michele > > > > -- > > > > Michele Baldessari > > > > C2A5 9DA3 9961 4FFB E01B D0BC DDD4 DCCB 7515 5C6D > > > > > > > > _________________________________________________________________ > > > > _________ > > > > OpenStack Development Mailing List (not for usage questions) > > > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:un > > > > subscribe > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > > > > -- > > > > Emilien Macchi > > > > _________________________________________________________________ > > > > _________ > > > > OpenStack Development Mailing List (not for usage questions) > > > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:un > > > > subscribe > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > ___________________________________________________________________ > > > _______ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsu > > > bscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Wed Jul 18 20:14:55 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 18 Jul 2018 15:14:55 -0500 Subject: [openstack-dev] [nova] Bug 1781710 killing the check queue In-Reply-To: <192ee905-7fb4-80c1-9dfc-65a120de0526@gmail.com> References: <01eae5c5-1cb3-ea44-1b2f-78e5049805e0@gmail.com> <5B4F8159.9010301@windriver.com> <192ee905-7fb4-80c1-9dfc-65a120de0526@gmail.com> Message-ID: <00dd9044-bdc3-06bb-a629-94e7a6e663d6@gmail.com> On 7/18/2018 1:13 PM, melanie witt wrote: >> >> Can we get rid of multi-create?  It keeps causing complications, and >> it already >> has weird behaviour if you ask for min_count=X and max_count=Y and only X >> instances can be scheduled.  (Currently it fails with NoValidHost, but >> it should >> arguably start up X instances.) > > We've discussed that before but I think users do use it and appreciate > the ability to boot instances in batches (one request). The behavior you > describe could be changed with a microversion, though I'm not sure if > that would mean we have to preserve old behavior with the previous > microversion. Correct, we can't just remove it since that's a backward incompatible microversion change. Plus, NFV people *love* it. > >>> After talking with Sean Mooney, we have another fix which is >>> self-contained to >>> the scheduler [5] so we wouldn't need to make any changes to the >>> RequestSpec >>> handling in conductor. It's admittedly a bit hairy, so I'm asking for >>> some eyes >>> on it since either way we go, we should get going soon before we hit >>> the FF and >>> RC1 rush which *always* kills the gate. >> >> One of your options mentioned using RequestSpec.num_instances to >> decide if it's >> in a multi-create.  Is there any reason to persist >> RequestSpec.num_instances? >> It seems like it's only applicable to the initial request, since after >> that each >> instance is managed individually. 
Yes, I agree RequestSpec.num_instances is something we shouldn't persist since it's only applicable to the initial server create (you can't multi-migrate a group of instances, for example - but I'm sure people have asked for that at some point), and it should be set per call to the scheduler, but that's a wider-ranging change since it would touch several parts of conductor, plus the request spec, plus the ServerGroupAntiAffinitySchedulerFilter. Honestly I'm OK with doing either, and I don't think they are mutually exclusive things, so we could make num_instances a per-request thing in the future for sanity reasons. -- Thanks, Matt From michele at acksyn.org Wed Jul 18 20:36:23 2018 From: michele at acksyn.org (Michele Baldessari) Date: Wed, 18 Jul 2018 22:36:23 +0200 Subject: [openstack-dev] [tripleo] Stein blueprint - Plan to remove Keepalived support (replaced by Pacemaker) In-Reply-To: <44c5568628bad2724e636a3e214ac46009f02bd1.camel@redhat.com> References: <20180716180719.GB4445@palahniuk.int.rhx> <0D8A4A7B-8F11-47C4-9D4F-7807C0B1591B@redhat.com> <20180717200025.GA23000@palahniuk.int.rhx> <44c5568628bad2724e636a3e214ac46009f02bd1.camel@redhat.com> Message-ID: <20180718203623.GA4106@palahniuk.int.rhx> On Wed, Jul 18, 2018 at 11:07:04AM -0400, Dan Prince wrote: > On Tue, 2018-07-17 at 22:00 +0200, Michele Baldessari wrote: > > Hi Jarda, > > > > thanks for these perspectives, this is very valuable! > > > > On Tue, Jul 17, 2018 at 06:01:21PM +0200, Jaromir Coufal wrote: > > > Not rooting for any approach here, just want to add a bit of > > > factors which might play a role when deciding which way to go: > > > > > > A) Performance matters, we should be improving simplicity and speed > > > of > > > deployments rather than making it heavier. If the deployment time > > > and > > > resource consumption is not significantly higher, I think it > > > doesn’t > > > cause an issue. But if there is a significant difference between > > > PCMK > > > and keepalived architecture, we would need to review that. > > > > +1 Should the pcmk take substantially more time then I agree, not > > worth > > defaulting to it. Worth also exploring how we could tweak things > > to make the setup of the cluster a bit faster (on a single node we > > can > > lower certain wait operations) but full agreement on this point. > > > > > B) Containerization of PCMK plans - eventually we would like to run > > > the whole undercloud/overcloud on minimal OS in containers to keep > > > improving the operations on the nodes (updates/upgrades/etc). If > > > because PCMK we would be forever stuck on BM, it would be a bit of > > > pita. As Michele said, maybe we can re-visit this. > > > > So I briefly discussed this in our team, and while it could be > > re-explored, we need to be very careful about the tradeoffs. > > This would be another layer which would bring quite a bit of > > complexity > > (pcs commands would have to be run inside a container, speed > > tradeoffs, > > more limited possibilities when it comes to upgrading/updating, etc.) > > > > > C) Unification of undercloud/overcloud is important for us, so +1 > > > to > > > whichever method is being used in both. 
But what I know, HA folks > > > went > > > to keepalived since it is simpler so would be good to keep in sync > > > (and good we have their presence here actually) :) > > > > Right so to be honest, the choice of keepalived on the undercloud for > > VIP predates me and I was not directly involved, so I lack the exact > > background for that choice (and I could not quickly reconstruct it > > from git > > history). But I think it is/was a reasonable choice for what it needs > > doing, although I probably would have picked just configuring the > > extra > > VIPs on the interfaces and have one service less to care about. > > +1 in general on the unification, with the caveats that have been > > discussed so far. > > I think it was more of that we wanted to use HAProxy for SSL > termination and keepalived is a simple enough way to set this up. > Instack-Undercloud has used HAProxy/keepalived for years in this > manner. > > I think this came up recently because downstream we did not have a > keepalived container. So it got a bit of spotlight on it as to why we > were using it. We do have a keepalived RPM and its worked as it has for > years already so as far as single node/undercloud setups go I think it > would continue to work fine. Kolla has had and supports the keepalived > container for awhile now as well. > > --- > > Comments on this thread seem to cover 2 main themes to me. > Simplification and the desire to use the same architecture as the > Overcloud (Pacemaker). And there is some competition between them. > > For simplification: If we can eliminate keepalived and still use > HAProxy (thus keeping the SSL termination features working) then I > think that would be worth trying. Specifically can we eliminate > Keepalived without swapping in Pacemaker? Michele: if you have ideas > here lets try them! I don't think it makes a lot of sense to just move to native IPs on interfaces just to remove keepalived. At least I don't see a good trade-off. If it has worked so far, I'd say let's just keep it (unless there are compelling arguments to remove it, of course) > With regards to Pacemaker I think we need to make an exception. It > seems way too heavy for single node setups and increases the complexity > there for very little benefit. > To me the shared architecture for > TripleO is the tools we use to setup services. By using t-h-t to drive > our setup of the Undercloud and All-In-One installers we are already > gaining a lot of benefit here. Pacemaker is weird as it is kind of > augments the architecture a bit (HA architecture). But Pacemaker is > also a service that gets configured by TripleO. So it kind of falls > into both categories. Pacemaker gives us features we need in the > Overcloud at the cost of some extra complexity. And in addition to all > this we are still running the Pacemaker processes themselves on > baremetal. All this just to say we are running the same "architecture" > on both the Undercloud and Overcloud? I'm not a fan. Fully agreed on the extra complexity, I think it is a matter of trade-offs. The only use case mentioned by Jarda where I don't think we can realistically get away without pcmk, is E). If we care enough about that we should allow it to be configured in the undercloud/all-in-one (maybe not as a default?), if we do not care about that use case (or we come up with some other clever ideas on how to achieve it), then that is one item off the list. 
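(To put the keepalived side in perspective: on the undercloud it is essentially a single vrrp_instance that owns the VIPs haproxy binds to, roughly along these lines - the interface name, router id and addresses below are made up:

    vrrp_instance tripleo_undercloud {
        state MASTER
        interface br-ctlplane
        virtual_router_id 51
        priority 101
        virtual_ipaddress {
            192.168.24.2/32
            192.168.24.3/32
        }
    }

So whichever way we go, the thing being removed is tiny; the cost/benefit question is really about what replaces it.)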
Besides E), I think a reasonable use case is to be able to have a small all-in-one installation that mimicks a more "real-world" overcloud. I think there is a bit of value in that, as long as the code to make it happen is not horribly huge and complex (and I was under the impression from Emilien's patchset that this is not the case) After this discussion, my personal take is that offering pcmk as an option (disabled by default) is something we should at least consider, but I also won't be all too sad if we decide not to do it (it is always something we can easily revisit later after all ;) > > > > > > > D) Undercloud HA is a nice have which I think we want to get to one > > > day, but it is not in as big demand as for example edge > > > deployments, > > > BM provisioning with pure OS, or multiple envs managed by single > > > undercloud. So even though undercloud HA is important, it won’t > > > bring > > > operators as many benefits as the previously mentioned > > > improvements. > > > Let’s keep it in mind when we are considering the amount of work > > > needed for it. > > > > +100 > > > > > E) One of the use-cases we want to take into account is expanind a > > > single-node deployment (all-in-one) to 3 node HA controller. I > > > think > > > it is important when evaluating PCMK/keepalived > > > > Right, so to be able to implement this, there is no way around having > > pacemaker (at least today until we have galera and rabbit). > > It still does not mean we have to default to it, but if you want to > > scale beyond one node, then there is no other option atm. > > > > > HTH > > > > It did, thanks! > > > > Michele > > > — Jarda > > > > > > > On Jul 17, 2018, at 05:04, Emilien Macchi > > > > wrote: > > > > > > > > Thanks everyone for the feedback, I've made a quick PoC: > > > > https://review.openstack.org/#/q/topic:bp/undercloud-pacemaker-de > > > > fault > > > > > > > > And I'm currently doing local testing. I'll publish results when > > > > progress is made, but I've made it so we have the choice to > > > > enable pacemaker (disabled by default), where keepalived would > > > > remain the default for now. > > > > > > > > On Mon, Jul 16, 2018 at 2:07 PM Michele Baldessari > > > n.org> wrote: > > > > On Mon, Jul 16, 2018 at 11:48:51AM -0400, Emilien Macchi wrote: > > > > > On Mon, Jul 16, 2018 at 11:42 AM Dan Prince > > > > > wrote: > > > > > [...] > > > > > > > > > > > The biggest downside IMO is the fact that our Pacemaker > > > > > > integration is > > > > > > not containerized. Nor are there any plans to finish the > > > > > > containerization of it. Pacemaker has to currently run on > > > > > > baremetal > > > > > > and this makes the installation of it for small dev/test > > > > > > setups a lot > > > > > > less desirable. It can launch containers just fine but the > > > > > > pacemaker > > > > > > installation itself is what concerns me for the long term. > > > > > > > > > > > > Until we have plans for containizing it I suppose I would > > > > > > rather see > > > > > > us keep keepalived as an option for these smaller setups. We > > > > > > can > > > > > > certainly change our default Undercloud to use Pacemaker (if > > > > > > we choose > > > > > > to do so). But having keepalived around for "lightweight" > > > > > > (zero or low > > > > > > footprint) installs that work is really quite desirable. > > > > > > > > > > > > > > > > That's a good point, and I agree with your proposal. > > > > > Michele, what's the long term plan regarding containerized > > > > > pacemaker? 
> > > > > > > > Well, we kind of started evaluating it (there was definitely not > > > > enough > > > > time around pike/queens as we were busy landing the bundles > > > > code), then > > > > due to discussions around k8s it kind of got off our radar. We > > > > can > > > > at least resume the discussions around it and see how much effort > > > > it > > > > would be. I'll bring it up with my team and get back to you. > > > > > > > > cheers, > > > > Michele > > > > -- > > > > Michele Baldessari > > > > C2A5 9DA3 9961 4FFB E01B D0BC DDD4 DCCB 7515 5C6D > > > > > > > > _________________________________________________________________ > > > > _________ > > > > OpenStack Development Mailing List (not for usage questions) > > > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:un > > > > subscribe > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > > > > -- > > > > Emilien Macchi > > > > _________________________________________________________________ > > > > _________ > > > > OpenStack Development Mailing List (not for usage questions) > > > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:un > > > > subscribe > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > ___________________________________________________________________ > > > _______ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsu > > > bscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Michele Baldessari C2A5 9DA3 9961 4FFB E01B D0BC DDD4 DCCB 7515 5C6D From melwittt at gmail.com Wed Jul 18 21:43:40 2018 From: melwittt at gmail.com (melanie witt) Date: Wed, 18 Jul 2018 14:43:40 -0700 Subject: [openstack-dev] [nova] Bug 1781710 killing the check queue In-Reply-To: <00dd9044-bdc3-06bb-a629-94e7a6e663d6@gmail.com> References: <01eae5c5-1cb3-ea44-1b2f-78e5049805e0@gmail.com> <5B4F8159.9010301@windriver.com> <192ee905-7fb4-80c1-9dfc-65a120de0526@gmail.com> <00dd9044-bdc3-06bb-a629-94e7a6e663d6@gmail.com> Message-ID: <33560712-10d1-2310-4333-afd4644f1e8f@gmail.com> On Wed, 18 Jul 2018 15:14:55 -0500, Matt Riedemann wrote: > On 7/18/2018 1:13 PM, melanie witt wrote: >>> Can we get rid of multi-create?  It keeps causing complications, and >>> it already >>> has weird behaviour if you ask for min_count=X and max_count=Y and only X >>> instances can be scheduled.  (Currently it fails with NoValidHost, but >>> it should >>> arguably start up X instances.) >> We've discussed that before but I think users do use it and appreciate >> the ability to boot instances in batches (one request). The behavior you >> describe could be changed with a microversion, though I'm not sure if >> that would mean we have to preserve old behavior with the previous >> microversion. > Correct, we can't just remove it since that's a backward incompatible > microversion change. Plus, NFV people*love* it. Sorry, I think I might have caused confusion with my question about a microversion. 
I was saying that to change the min_count=X and max_count=Y behavior of raising NoValidHost if X can be satisfied but Y can't, I thought we could change that in a microversion. And I wasn't sure if that would also mean we would have to keep the old behavior for previous microversions (and thus maintain both behaviors). -melanie From work at seanmooney.info Wed Jul 18 22:58:00 2018 From: work at seanmooney.info (work at seanmooney.info) Date: Wed, 18 Jul 2018 23:58:00 +0100 Subject: [openstack-dev] [nova] Bug 1781710 killing the check queue In-Reply-To: <00dd9044-bdc3-06bb-a629-94e7a6e663d6@gmail.com> References: <01eae5c5-1cb3-ea44-1b2f-78e5049805e0@gmail.com> <5B4F8159.9010301@windriver.com> <192ee905-7fb4-80c1-9dfc-65a120de0526@gmail.com> <00dd9044-bdc3-06bb-a629-94e7a6e663d6@gmail.com> Message-ID: <24a813346fa789d737f8e95394e1f47cbfdfdf1e.camel@seanmooney.info> On Wed, 2018-07-18 at 15:14 -0500, Matt Riedemann wrote: > On 7/18/2018 1:13 PM, melanie witt wrote: > > > > > > Can we get rid of multi-create? It keeps causing complications, > > > and > > > it already > > > has weird behaviour if you ask for min_count=X and max_count=Y > > > and only X > > > instances can be scheduled. (Currently it fails with > > > NoValidHost, but > > > it should > > > arguably start up X instances.) > > > > We've discussed that before but I think users do use it and > > appreciate > > the ability to boot instances in batches (one request). The > > behavior you > > describe could be changed with a microversion, though I'm not sure > > if > > that would mean we have to preserve old behavior with the previous > > microversion. > > Correct, we can't just remove it since that's a backward > incompatible > microversion change. Plus, NFV people *love* it. do they? alot of nfv folks use heat,osm or onap to drive there deployments. im not sure if any of thoes actully use the multi create support. but yes people proably do use it. > > > > > > > After talking with Sean Mooney, we have another fix which is > > > > self-contained to > > > > the scheduler [5] so we wouldn't need to make any changes to > > > > the > > > > RequestSpec > > > > handling in conductor. It's admittedly a bit hairy, so I'm > > > > asking for > > > > some eyes > > > > on it since either way we go, we should get going soon before > > > > we hit > > > > the FF and > > > > RC1 rush which *always* kills the gate. > > > > > > One of your options mentioned using RequestSpec.num_instances to > > > decide if it's > > > in a multi-create. Is there any reason to persist > > > RequestSpec.num_instances? > > > It seems like it's only applicable to the initial request, since > > > after > > > that each > > > instance is managed individually. > > Yes, I agree RequestSpec.num_instances is something we shouldn't > persist > since it's only applicable to the initial server create (you can't > multi-migrate a group of instances, for example - but I'm sure > people > have asked for that at some point), and it should be set per call to > the > scheduler, but that's a wider-ranging change since it would touch > several parts of conductor, plus the request spec, plus the > ServerGroupAntiAffinitySchedulerFilter. i might be a little biased but i think the localised change in the schduler makes sense for now and we should clean this up in stine. general update. 
I spent some time this afternoon debugging Matt's regression test https://review.openstack.org/#/c/583339 and it now works as intended with the addition of disabling the late check on the compute node in the regression test to mimic devstack. Matt has rebased https://review.openstack.org/#/c/583347 on top of the regression test and it's currently in the CI queue; hopefully that will pass soon. While the change is less than ideal, it is backportable downstream if needed, whereas the wider change would not be, so that is a plus in the short term. > Honestly I'm OK with doing either, and I don't think they are > mutually > exclusive things, so we could make num_instances a per-request thing > in > the future for sanity reasons.
From chris.friesen at windriver.com Wed Jul 18 23:01:23 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Wed, 18 Jul 2018 17:01:23 -0600 Subject: Re: [openstack-dev] [nova] Bug 1781710 killing the check queue In-Reply-To: <33560712-10d1-2310-4333-afd4644f1e8f@gmail.com> References: <01eae5c5-1cb3-ea44-1b2f-78e5049805e0@gmail.com> <5B4F8159.9010301@windriver.com> <192ee905-7fb4-80c1-9dfc-65a120de0526@gmail.com> <00dd9044-bdc3-06bb-a629-94e7a6e663d6@gmail.com> <33560712-10d1-2310-4333-afd4644f1e8f@gmail.com> Message-ID: <5B4FC6C3.7000001@windriver.com> On 07/18/2018 03:43 PM, melanie witt wrote: > On Wed, 18 Jul 2018 15:14:55 -0500, Matt Riedemann wrote: >> On 7/18/2018 1:13 PM, melanie witt wrote: >>>> Can we get rid of multi-create? It keeps causing complications, and >>>> it already >>>> has weird behaviour if you ask for min_count=X and max_count=Y and only X >>>> instances can be scheduled. (Currently it fails with NoValidHost, but >>>> it should >>>> arguably start up X instances.) >>> We've discussed that before but I think users do use it and appreciate >>> the ability to boot instances in batches (one request). The behavior you >>> describe could be changed with a microversion, though I'm not sure if >>> that would mean we have to preserve old behavior with the previous >>> microversion. >> Correct, we can't just remove it since that's a backward incompatible >> microversion change. Plus, NFV people *love* it. > > Sorry, I think I might have caused confusion with my question about a > microversion. I was saying that to change the min_count=X and max_count=Y > behavior of raising NoValidHost if X can be satisfied but Y can't, I thought we > could change that in a microversion. And I wasn't sure if that would also mean > we would have to keep the old behavior for previous microversions (and thus > maintain both behaviors). I understood you. :) For the case where we could satisfy min_count but not max_count I think we *would* need to keep the existing kill-them-all behaviour for existing microversions since that's definitely an end-user-visible behaviour.
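For reference, the whole multi-create "API" from the user side is just the min_count/max_count pair on a normal boot request. A minimal sketch with python-novaclient (the auth values, image and flavor IDs below are made up):

    from keystoneauth1 import loading, session
    from novaclient import client

    loader = loading.get_plugin_loader('password')
    auth = loader.load_from_options(
        auth_url='http://controller:5000/v3',
        username='demo', password='secret', project_name='demo',
        user_domain_name='Default', project_domain_name='Default')
    nova = client.Client('2.1', session=session.Session(auth=auth))

    # One request asking for anywhere between 2 and 5 servers. Today,
    # if the scheduler cannot place all 5, the whole request fails with
    # NoValidHost instead of building the 2 it could have satisfied.
    nova.servers.create(name='batch', image='<image-uuid>',
                        flavor='<flavor-id>', min_count=2, max_count=5)

That last comment is exactly the behaviour we would be changing (or keeping, per microversion) in any future cleanup.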
Chris From tony at bakeyournoodle.com Wed Jul 18 23:46:27 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Thu, 19 Jul 2018 09:46:27 +1000 Subject: [openstack-dev] [tripleo] EOL process for newton branches Message-ID: <20180718234625.GA30070@thor.bakeyournoodle.com> Hi All, As of I3671f10d5a2fef0e91510a40835de962637f16e5 we have meta-data in openstack/releases that tells us that the following repos are at newton-eol: - openstack/instack-undercloud - openstack/os-net-config - openstack/puppet-tripleo - openstack/tripleo-common - openstack/tripleo-heat-templates I was setting up the request to create the tags and delete those branches but I noticed that the following repos have newton branches and are not in the list above: - openstack/instack - openstack/os-apply-config - openstack/os-collect-config - openstack/os-refresh-config - openstack/python-tripleoclient - openstack/tripleo-image-elements - openstack/tripleo-puppet-elements - openstack/tripleo-ui - openstack/tripleo-validations So I guess there are a couple of options here: 1) Just EOL the 5 repos that opensatck/releases knows are at EOL 2) EOL the repos from both lists ad update openstack/releases to flag them as such I feel like option 2 is the correct option but perhaps there is a reason those repos where not tagged and released Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From emilien at redhat.com Thu Jul 19 00:08:16 2018 From: emilien at redhat.com (Emilien Macchi) Date: Wed, 18 Jul 2018 20:08:16 -0400 Subject: [openstack-dev] [tripleo] EOL process for newton branches In-Reply-To: <20180718234625.GA30070@thor.bakeyournoodle.com> References: <20180718234625.GA30070@thor.bakeyournoodle.com> Message-ID: Option 2, EOL everything. Thanks a lot for your help on this one, Tony. --- Emilien Macchi On Wed, Jul 18, 2018, 7:47 PM Tony Breeds, wrote: > > Hi All, > As of I3671f10d5a2fef0e91510a40835de962637f16e5 we have meta-data in > openstack/releases that tells us that the following repos are at > newton-eol: > - openstack/instack-undercloud > - openstack/os-net-config > - openstack/puppet-tripleo > - openstack/tripleo-common > - openstack/tripleo-heat-templates > > I was setting up the request to create the tags and delete those > branches but I noticed that the following repos have newton branches and > are not in the list above: > > - openstack/instack > - openstack/os-apply-config > - openstack/os-collect-config > - openstack/os-refresh-config > - openstack/python-tripleoclient > - openstack/tripleo-image-elements > - openstack/tripleo-puppet-elements > - openstack/tripleo-ui > - openstack/tripleo-validations > > So I guess there are a couple of options here: > > 1) Just EOL the 5 repos that opensatck/releases knows are at EOL > 2) EOL the repos from both lists ad update openstack/releases to flag > them as such > > I feel like option 2 is the correct option but perhaps there is a reason > those repos where not tagged and released > > > Yours Tony. > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mriedemos at gmail.com Thu Jul 19 00:11:11 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 18 Jul 2018 19:11:11 -0500 Subject: [openstack-dev] [nova] Bug 1781710 killing the check queue In-Reply-To: <24a813346fa789d737f8e95394e1f47cbfdfdf1e.camel@seanmooney.info> References: <01eae5c5-1cb3-ea44-1b2f-78e5049805e0@gmail.com> <5B4F8159.9010301@windriver.com> <192ee905-7fb4-80c1-9dfc-65a120de0526@gmail.com> <00dd9044-bdc3-06bb-a629-94e7a6e663d6@gmail.com> <24a813346fa789d737f8e95394e1f47cbfdfdf1e.camel@seanmooney.info> Message-ID: <9101ab38-6e73-00da-1749-ffc7108c2864@gmail.com> On 7/18/2018 5:58 PM, work at seanmooney.info wrote: > general update. > i spent some time this afternoon debuging matt's regression test > https://review.openstack.org/#/c/583339 > and it now works as intended with the addtion of disableing the late > check on the compute node in the regression test to mimic devstack. Sean, thank you again for figuring out the issue in the regression test, that helps a ton in asserting the fix (and it also showed I was missing a couple of things in the fix when I rebased on top of the test). > > matt has rebasedhttps://review.openstack.org/#/c/583347 ontop of > the regression test and its currently in the ci queue. > hopefully that will pass soon. > > while the chage is less then ideal it is backportable downstream if > needed where as the wider change would not be easily so that is a > plus in the short term. We don't have to backport this fix, it was a regression introduced in Rocky, so that's a good thing. But agree we can do more cleanups in Stein if we want to change how we handle RequestSpec.num_instances so it's not persisted and set per operation (or just not used at all in scheduling since we don't really need it anymore). -- Thanks, Matt From yjf1970231893 at gmail.com Thu Jul 19 02:51:58 2018 From: yjf1970231893 at gmail.com (Jeff Yang) Date: Thu, 19 Jul 2018 10:51:58 +0800 Subject: [openstack-dev] [octavia] Make amphora-agent support http rest api Message-ID: In some private cloud environments, the possibility of vm being attacked is very small, and all personnel are trusted. At this time, the administrator hopes to reduce the complexity of octavia deployment and operation and maintenance. We can let the amphora-agent provide the http api so that the administrator can ignore the issue of the certificate. https://storyboard.openstack.org/#!/story/2003027 -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Thu Jul 19 04:55:21 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 19 Jul 2018 13:55:21 +0900 Subject: [openstack-dev] [nova][cinder][neutron][qa] Should we add a tempest-slow job? In-Reply-To: References: <9b338d82-bbcf-f6c0-9ba0-9a402838d958@gmail.com> Message-ID: <164b0e47bd1.ffcf65049382.182837544735706073@ghanshyammann.com> > On Sun, May 13, 2018 at 1:20 PM, Ghanshyam Mann wrote: > > On Fri, May 11, 2018 at 10:45 PM, Matt Riedemann wrote: > >> The tempest-full job used to run API and scenario tests concurrently, and if > >> you go back far enough I think it also ran slow tests. > >> > >> Sometime in the last year or so, the full job was changed to run the > >> scenario tests in serial and exclude the slow tests altogether. So the API > >> tests run concurrently first, and then the scenario tests run in serial. > >> During that change, some other tests were identified as 'slow' and marked as > >> such, meaning they don't get run in the normal tempest-full job. 
> >> > >> There are some valuable scenario tests marked as slow, however, like the > >> only encrypted volume testing we have in tempest is marked slow so it > >> doesn't get run on every change for at least nova. > > > > Yes, basically slow tests were selected based on > > https://ethercalc.openstack.org/nu56u2wrfb2b and there were frequent > > gate failure for heavy tests mainly from ssh checks so we tried to > > mark more tests as slow. > > I agree that some of them are not really slow at least in today situation. > > > >> > >> There is only one job that can be run against nova changes which runs the > >> slow tests but it's in the experimental queue so people forget to run it. > > > > Tempest job "legacy-tempest-dsvm-neutron-scenario-multinode-lvm-multibackend" > > run those slow tests including migration and LVM multibackend tests. > > This job runs on tempest check pipeline and experimental (as you > > mentioned) on nova and cinder [3]. We marked this as n-v to check its > > stability and now it is good to go as voting on tempest. > > > >> > >> As a test, I've proposed a nova-slow job [1] which only runs the slow tests > >> and only the compute API and scenario tests. Since there currently no > >> compute API tests marked as slow, it's really just running slow scenario > >> tests. Results show it runs 37 tests in about 37 minutes [2]. The overall > >> job runtime was 1 hour and 9 minutes, which is on average less than the > >> tempest-full job. The nova-slow job is also running scenarios that nova > >> patches don't actually care about, like the neutron IPv6 scenario tests. > >> > >> My question is, should we make this a generic tempest-slow job which can be > >> run either in the integrated-gate or at least in nova/neutron/cinder > >> consistently (I'm not sure if there are slow tests for just keystone or > >> glance)? I don't know if the other projects already have something like this > >> that they gate on. If so, a nova-specific job for nova changes is fine for > >> me. > > > > +1 on idea. As of now slow marked tests are from nova, cinder and > > neutron scenario tests and 2 API swift tests only [4]. I agree that > > making a generic job in tempest is better for maintainability. We can > > use existing job for that with below modification- > > - We can migrate > > "legacy-tempest-dsvm-neutron-scenario-multinode-lvm-multibackend" job > > zuulv3 in tempest repo > > - We can see if we can move migration tests out of it and use > > "nova-live-migration" job (in tempest check pipeline ) which is much > > better in live migration env setup and controlled by nova. > > - then it can be name something like > > "tempest-scenario-multinode-lvm-multibackend". > > - run this job in nova, cinder, neutron check pipeline instead of experimental. > > Like this - https://review.openstack.org/#/q/status:open+project:openstack/tempest+branch:master+topic:scenario-tests-job > > That makes scenario job as generic with running all scenario tests > including slow tests with concurrency 2. I made few cleanup and moved > live migration tests out of it which is being run by > 'nova-live-migration' job. Last patch making this job as voting on > tempest side. > > If looks good, we can use this to run on project side pipeline as voting. 
Update on this thread: Old Scenario job "legacy-tempest-dsvm-neutron-scenario-multinode-lvm-multibackend" has been migrated to Tempest as new job named "tempest-scenario-all" job[1] Changes from old job to new job: - This new job will run all the scenario tests including slow with lvm multibackend. Same as old job - Executed the live migration API tests out of it. Live migration API tests runs on separate nova job "nova-live-migration". - This new job runs as voting on Tempest check and gate pipeline. This is ready to use for cross project also. i have pushed the patch to nova, neutron, cinder to use this new job[3] and remove "legacy-tempest-dsvm-neutron-scenario-multinode-lvm-multibackend" from project-config[4]. Let me know your feedback on proposed patches. [1] http://git.openstack.org/cgit/openstack/tempest/tree/.zuul.yaml#n147 [2] https://review.openstack.org/#/q/topic:run-tempest-scenario-all-job+(status:open+OR+status:merged) [3] https://review.openstack.org/#/q/topic:run-tempest-scenario-all-job+(status:open+OR+status:merged) [4] https://review.openstack.org/#/q/topic:drop-legacy-scenario-job+(status:open+OR+status:merged) > > -gmann > > > > > Another update on slow tests is that we are trying the possibility of > > taking back the slow tests in tempest-full with new job > > "tempest-full-parallel" [5]. Currently this job is n-v and if > > everything works fine in this new job then, we can make tempest-full > > job to run the slow tests are it used to do previously. > > > >> > >> [1] https://review.openstack.org/#/c/567697/ > >> [2] > >> http://logs.openstack.org/97/567697/1/check/nova-slow/bedfafb/job-output.txt.gz#_2018-05-10_23_46_47_588138 > > > > ..3 http://codesearch.openstack.org/?q=legacy-tempest-dsvm-neutron-scenario-multinode-lvm-multibackend&i=nope&files=&repos= > > ..4 https://github.com/openstack/tempest/search?utf8=%E2%9C%93&q=%22type%3D%27slow%27%22&type= > > ..5 https://github.com/openstack/tempest/blob/9c628189e798f46de8c4b9484237f4d6dc6ade7e/.zuul.yaml#L48 > > > > > > -gmann > > > >> > >> -- > >> > >> Thanks, > >> > >> Matt > >> > >> __________________________________________________________________________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From tony at bakeyournoodle.com Thu Jul 19 04:59:46 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Thu, 19 Jul 2018 14:59:46 +1000 Subject: [openstack-dev] [tripleo] EOL process for newton branches In-Reply-To: References: <20180718234625.GA30070@thor.bakeyournoodle.com> Message-ID: <20180719045945.GB30070@thor.bakeyournoodle.com> On Wed, Jul 18, 2018 at 08:08:16PM -0400, Emilien Macchi wrote: > Option 2, EOL everything. > Thanks a lot for your help on this one, Tony. No problem. I've created: https://review.openstack.org/583856 to tag final releases for tripleo deliverables and then mark them as EOL. 
Once that merges we can arrange for someone, with appropriate permissions to run: # EOL repos belonging to tripleo eol_branch.sh -- stable/newton newton-eol \ openstack/instack openstack/instack-undercloud \ openstack/os-apply-config openstack/os-collect-config \ openstack/os-net-config openstack/os-refresh-config \ openstack/puppet-tripleo openstack/python-tripleoclient \ openstack/tripleo-common openstack/tripleo-heat-templates \ openstack/tripleo-image-elements \ openstack/tripleo-puppet-elements openstack/tripleo-ui \ openstack/tripleo-validations Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From skaplons at redhat.com Thu Jul 19 07:12:53 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Thu, 19 Jul 2018 09:12:53 +0200 Subject: [openstack-dev] [nova][cinder][neutron][qa] Should we add a tempest-slow job? In-Reply-To: <164b0e47bd1.ffcf65049382.182837544735706073@ghanshyammann.com> References: <9b338d82-bbcf-f6c0-9ba0-9a402838d958@gmail.com> <164b0e47bd1.ffcf65049382.182837544735706073@ghanshyammann.com> Message-ID: Hi, Thanks. I just send patch [1] to add this new job to Neutron failure rate Grafana dashboard. [1] https://review.openstack.org/#/c/583870/ > Wiadomość napisana przez Ghanshyam Mann w dniu 19.07.2018, o godz. 06:55: > >> On Sun, May 13, 2018 at 1:20 PM, Ghanshyam Mann wrote: >>> On Fri, May 11, 2018 at 10:45 PM, Matt Riedemann wrote: >>>> The tempest-full job used to run API and scenario tests concurrently, and if >>>> you go back far enough I think it also ran slow tests. >>>> >>>> Sometime in the last year or so, the full job was changed to run the >>>> scenario tests in serial and exclude the slow tests altogether. So the API >>>> tests run concurrently first, and then the scenario tests run in serial. >>>> During that change, some other tests were identified as 'slow' and marked as >>>> such, meaning they don't get run in the normal tempest-full job. >>>> >>>> There are some valuable scenario tests marked as slow, however, like the >>>> only encrypted volume testing we have in tempest is marked slow so it >>>> doesn't get run on every change for at least nova. >>> >>> Yes, basically slow tests were selected based on >>> https://ethercalc.openstack.org/nu56u2wrfb2b and there were frequent >>> gate failure for heavy tests mainly from ssh checks so we tried to >>> mark more tests as slow. >>> I agree that some of them are not really slow at least in today situation. >>> >>>> >>>> There is only one job that can be run against nova changes which runs the >>>> slow tests but it's in the experimental queue so people forget to run it. >>> >>> Tempest job "legacy-tempest-dsvm-neutron-scenario-multinode-lvm-multibackend" >>> run those slow tests including migration and LVM multibackend tests. >>> This job runs on tempest check pipeline and experimental (as you >>> mentioned) on nova and cinder [3]. We marked this as n-v to check its >>> stability and now it is good to go as voting on tempest. >>> >>>> >>>> As a test, I've proposed a nova-slow job [1] which only runs the slow tests >>>> and only the compute API and scenario tests. Since there currently no >>>> compute API tests marked as slow, it's really just running slow scenario >>>> tests. Results show it runs 37 tests in about 37 minutes [2]. The overall >>>> job runtime was 1 hour and 9 minutes, which is on average less than the >>>> tempest-full job. 
The nova-slow job is also running scenarios that nova >>>> patches don't actually care about, like the neutron IPv6 scenario tests. >>>> >>>> My question is, should we make this a generic tempest-slow job which can be >>>> run either in the integrated-gate or at least in nova/neutron/cinder >>>> consistently (I'm not sure if there are slow tests for just keystone or >>>> glance)? I don't know if the other projects already have something like this >>>> that they gate on. If so, a nova-specific job for nova changes is fine for >>>> me. >>> >>> +1 on idea. As of now slow marked tests are from nova, cinder and >>> neutron scenario tests and 2 API swift tests only [4]. I agree that >>> making a generic job in tempest is better for maintainability. We can >>> use existing job for that with below modification- >>> - We can migrate >>> "legacy-tempest-dsvm-neutron-scenario-multinode-lvm-multibackend" job >>> zuulv3 in tempest repo >>> - We can see if we can move migration tests out of it and use >>> "nova-live-migration" job (in tempest check pipeline ) which is much >>> better in live migration env setup and controlled by nova. >>> - then it can be name something like >>> "tempest-scenario-multinode-lvm-multibackend". >>> - run this job in nova, cinder, neutron check pipeline instead of experimental. >> >> Like this - https://review.openstack.org/#/q/status:open+project:openstack/tempest+branch:master+topic:scenario-tests-job >> >> That makes scenario job as generic with running all scenario tests >> including slow tests with concurrency 2. I made few cleanup and moved >> live migration tests out of it which is being run by >> 'nova-live-migration' job. Last patch making this job as voting on >> tempest side. >> >> If looks good, we can use this to run on project side pipeline as voting. > > Update on this thread: > Old Scenario job "legacy-tempest-dsvm-neutron-scenario-multinode-lvm-multibackend" has been migrated to Tempest as new job named "tempest-scenario-all" job[1] > > Changes from old job to new job: > - This new job will run all the scenario tests including slow with lvm multibackend. Same as old job > - Executed the live migration API tests out of it. Live migration API tests runs on separate nova job "nova-live-migration". > - This new job runs as voting on Tempest check and gate pipeline. > > This is ready to use for cross project also. i have pushed the patch to nova, neutron, cinder to use this new job[3] and remove "legacy-tempest-dsvm-neutron-scenario-multinode-lvm-multibackend" from project-config[4]. > > Let me know your feedback on proposed patches. > > [1] http://git.openstack.org/cgit/openstack/tempest/tree/.zuul.yaml#n147 > [2] https://review.openstack.org/#/q/topic:run-tempest-scenario-all-job+(status:open+OR+status:merged) > [3] https://review.openstack.org/#/q/topic:run-tempest-scenario-all-job+(status:open+OR+status:merged) > [4] https://review.openstack.org/#/q/topic:drop-legacy-scenario-job+(status:open+OR+status:merged) > >> >> -gmann >> >>> >>> Another update on slow tests is that we are trying the possibility of >>> taking back the slow tests in tempest-full with new job >>> "tempest-full-parallel" [5]. Currently this job is n-v and if >>> everything works fine in this new job then, we can make tempest-full >>> job to run the slow tests are it used to do previously. 
>>> >>>> >>>> [1] https://review.openstack.org/#/c/567697/ >>>> [2] >>>> http://logs.openstack.org/97/567697/1/check/nova-slow/bedfafb/job-output.txt.gz#_2018-05-10_23_46_47_588138 >>> >>> ..3 http://codesearch.openstack.org/?q=legacy-tempest-dsvm-neutron-scenario-multinode-lvm-multibackend&i=nope&files=&repos= >>> ..4 https://github.com/openstack/tempest/search?utf8=%E2%9C%93&q=%22type%3D%27slow%27%22&type= >>> ..5 https://github.com/openstack/tempest/blob/9c628189e798f46de8c4b9484237f4d6dc6ade7e/.zuul.yaml#L48 >>> >>> >>> -gmann >>> >>>> >>>> -- >>>> >>>> Thanks, >>>> >>>> Matt >>>> >>>> __________________________________________________________________________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev — Slawek Kaplonski Senior software engineer Red Hat From zigo at debian.org Thu Jul 19 08:32:40 2018 From: zigo at debian.org (Thomas Goirand) Date: Thu, 19 Jul 2018 10:32:40 +0200 Subject: [openstack-dev] OpenStack lagging behind 2 major python versions: we need a Python 3.7 gate In-Reply-To: <54887540-1350-708f-dcb4-0dec4bfac7b3@redhat.com> References: <8e139644-44f3-fe80-ecda-39c5dab06a0a@debian.org> <54887540-1350-708f-dcb4-0dec4bfac7b3@redhat.com> Message-ID: <124d0b84-a01e-e03d-f432-6a3f2047a43e@debian.org> On 07/18/2018 06:42 AM, Ian Wienand wrote: > While I'm reserved about the > idea of full platform functional tests, essentially having a > wide-variety of up-to-date tox environments using some of the methods > discussed there is, I think, a very practical way to be cow-catching > some of the bigger issues with Python version updates. If we are to > expend resources, my 2c worth is that pushing in that direction gives > the best return on effort. > > -i Hi Ian, Thanks a lot for your reply, that's very useful. I very much agree that testing the latest Qemu / libvirt could be a problem if it fails too often, and same with other components, however, these needs to be addressed anyway at some point. If we can't do it this way, then we have to define a mechanism to find out. Maybe a dvsm periodic task unrelated to a specific project would do? Anyway, my post was *not* about functional testing, so let's not talk about this. What I would love to get addressed is catching problems with newer language updates. Having them early avoids downstream distribution doing the heavy work, which is not sustainable considering the amount of people (which is about 1 or 2 guys per distro), and that's what I would like to be addressed. For example, "async" becoming a keyword in Python 3.7 is something I would have very much like to be caught by some kind of upstream CI running unit tests, rather than Debian and Ubuntu package maintainers fixing the problems as we get FTBFS (Fails To Build From Source) bugs filed in the BTS, and when we find out by ourselves that some package cannot be installed or built. This happened with oslo.messaging, taskflow, etc. This is just the new Python 3.7 things, though there was numerous problems with Python 3.6. 
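To make this concrete, the breakage is usually as trivial as an argument or variable that used to be a perfectly legal name. A rough, made-up example (not taken from any particular project):

    # Valid on Python 3.5, a DeprecationWarning on 3.6, and a hard
    # SyntaxError on 3.7, where "async" is now a reserved keyword:
    #
    #     def add_call(fn, async=False):
    #         ...
    #
    # The fix is a simple rename:
    def add_call(fn, run_async=False):
        """Register fn, optionally to be run asynchronously."""
        return (fn, run_async)

A unit test job running on whatever the newest Python 3 release is would catch this kind of thing months before it lands on the plates of the one or two package maintainers per distro.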
Currently, it looks like Heat also has unit test failures in Sid (not sure yet what the issue is). Waiting for Bionic to be released to start gating unit tests on Python 3.6 is IMO a way too late, as for example Debian Sid was running Python 3.6 about a year before that, and that's what I would like to be fixed. Using either Fedora or SuSE is fine to me, as long as it gets latest Python language fast enough (does it go as fast as Debian testing?). If it's for doing unit testing only (ie: no functional tests using Qemu, libvirt and other component of this type) looks like a good plan. Cheers, Thomas Goirand (zigo) From rasca at redhat.com Thu Jul 19 10:01:16 2018 From: rasca at redhat.com (Raoul Scarazzini) Date: Thu, 19 Jul 2018 12:01:16 +0200 Subject: [openstack-dev] [tripleo] Stein blueprint - Plan to remove Keepalived support (replaced by Pacemaker) In-Reply-To: <20180718203623.GA4106@palahniuk.int.rhx> References: <20180716180719.GB4445@palahniuk.int.rhx> <0D8A4A7B-8F11-47C4-9D4F-7807C0B1591B@redhat.com> <20180717200025.GA23000@palahniuk.int.rhx> <44c5568628bad2724e636a3e214ac46009f02bd1.camel@redhat.com> <20180718203623.GA4106@palahniuk.int.rhx> Message-ID: On 18/07/2018 22:36, Michele Baldessari wrote: [...] > Besides E), I think a reasonable use case is to be able to have a small > all-in-one installation that mimicks a more "real-world" overcloud. > I think there is a bit of value in that, as long as the code to make it > happen is not horribly huge and complex (and I was under the impression > from Emilien's patchset that this is not the case) [...] Small question aside related to all-in-one: we're talking about use cases in which we might want to go from 1 to 3 controllers, but how this can become a thing? I always thought to all-in-one as a developer/ci "tool", so why we should care about giving the possibility to expand? This question is related also to the main topic of this thread: it was proposed to replace Keepalived with anything (instead of Pacemaker), and one of the outcomes was that this approach would not guarantee some of the goals, like undercloud HA and keeping 1:1 structure between undercloud and overcloud. But what else are we supposed to control with Pacemaker on the undercloud apart from the IPs? -- Raoul Scarazzini rasca at redhat.com From skaplons at redhat.com Thu Jul 19 11:09:54 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Thu, 19 Jul 2018 13:09:54 +0200 Subject: [openstack-dev] [Nova][Cinder][Tempest] Help with tempest.api.compute.servers.test_device_tagging.TaggedAttachmentsTest.test_tagged_attachment needed Message-ID: <6AEB5700-BBCA-46C3-9A48-83EC7CC92475@redhat.com> Hi, Since some time we see that test tempest.api.compute.servers.test_device_tagging.TaggedAttachmentsTest.test_tagged_attachment is failing sometimes. Bug about that is reported for Tempest currently [1] but after small patch [2] was merged I was today able to check what cause this issue. Test which is failing is in [3] and it looks that everything is going fine with it up to last line of test. So volume and port are created, attached, tags are set properly, both devices are detached properly also and at the end test is failing as in http://169.254.169.254/openstack/latest/meta_data.json still has some device inside. And it looks now from [4] that it is volume which isn’t removed from this meta_data.json. So I think that it would be good if people from Nova and Cinder teams could look at it and try to figure out what is going on there and how it can be fixed. 
Thanks in advance for help. [1] https://bugs.launchpad.net/tempest/+bug/1775947 [2] https://review.openstack.org/#/c/578765/ [3] https://github.com/openstack/tempest/blob/master/tempest/api/compute/servers/test_device_tagging.py#L330 [4] http://logs.openstack.org/69/567369/15/check/tempest-full/528bc75/job-output.txt.gz#_2018-07-19_10_06_09_273919 — Slawek Kaplonski Senior software engineer Red Hat From thierry at openstack.org Thu Jul 19 12:24:44 2018 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 19 Jul 2018 14:24:44 +0200 Subject: [openstack-dev] [tc] [all] TC Report 18-26 In-Reply-To: <052e02cf-750e-5755-2494-a2ef4ed73a3d@redhat.com> References: <70afeb87-37c9-1595-ffa4-aadbd1a90228@gmail.com> <542aba82-5c0c-3549-e587-2deded610fe9@gmail.com> <5e835365-2d1a-d388-66b1-88cdf8c9a0fb@redhat.com> <20521a8b-5f58-6ee0-a805-7dc9400b301b@openstack.org> <052e02cf-750e-5755-2494-a2ef4ed73a3d@redhat.com> Message-ID: Zane Bitter wrote: > [...] >> And I'm not convinced that's an either/or choice... > > I said specifically that it's an either/or/and choice. I was speaking more about the "we need to pick between two approaches, let's document them" that the technical vision exercise started as. Basically I mean I'm missing clear examples of where pursuing AWS would mean breaking vCenter. > So it's not a binary choice but it's very much a ternary choice IMHO. > The middle ground, where each project - or even each individual > contributor within a project - picks an option independently and > proceeds on the implicit assumption that everyone else chose the same > option (although - spoiler alert - they didn't)... that's not a good > place to be. Right, so I think I'm leaning for an "and" choice. Basically OpenStack wants to be an AWS, but ended up being used a lot as a vCenter (for multiple reasons, including the limited success of US-based public cloud offerings in 2011-2016). IMHO we should continue to target an AWS, while doing our best to not break those who use it as a vCenter. Would explicitly acknowledging that (we still want to do an AWS, but we need to care about our vCenter users) get us the alignment you seek ? -- Thierry Carrez (ttx) From emilien at redhat.com Thu Jul 19 12:32:23 2018 From: emilien at redhat.com (Emilien Macchi) Date: Thu, 19 Jul 2018 08:32:23 -0400 Subject: [openstack-dev] [tripleo] Stein blueprint - Plan to remove Keepalived support (replaced by Pacemaker) In-Reply-To: References: <20180716180719.GB4445@palahniuk.int.rhx> <0D8A4A7B-8F11-47C4-9D4F-7807C0B1591B@redhat.com> <20180717200025.GA23000@palahniuk.int.rhx> <44c5568628bad2724e636a3e214ac46009f02bd1.camel@redhat.com> <20180718203623.GA4106@palahniuk.int.rhx> Message-ID: On Thu, Jul 19, 2018 at 6:02 AM Raoul Scarazzini wrote: [...] > Small question aside related to all-in-one: we're talking about use > cases in which we might want to go from 1 to 3 controllers, but how this > can become a thing? I always thought to all-in-one as a developer/ci > "tool", so why we should care about giving the possibility to expand? > We have a few other use-cases but 2 of them are: - PoC deployed on the field, start with one controller, scale up to 3 controllers (with compute services deployed as well). - Edge Computing, where we could think of a controller being scaled-out as well, or a remote compute note being added, with VMs in HA with pacemaker. But I agree that the first target for now is to fulfil the developer use case, and PoC use case (on one node). 
This question is related also to the main topic of this thread: it was > proposed to replace Keepalived with anything (instead of Pacemaker), and > one of the outcomes was that this approach would not guarantee some of > the goals, like undercloud HA and keeping 1:1 structure between > undercloud and overcloud. But what else are we supposed to control with > Pacemaker on the undercloud apart from the IPs? > Nothing, AFIK. The VIPs were the only things we wanted to managed on a single-node undercloud. -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From lujinluo at gmail.com Thu Jul 19 13:34:14 2018 From: lujinluo at gmail.com (Lujin Luo) Date: Thu, 19 Jul 2018 21:34:14 +0800 Subject: [openstack-dev] [neutron][upgrade] Skip Neutron upgrade IRC meeting on July 19th Message-ID: Hi everyone, Due to we have two core members who cannot join the weekly meeting, we think it would be better to skip this meeting and resume on next week. If you have any questions, please reply to this thread. Best regards, Lujin From jimmy at openstack.org Thu Jul 19 14:47:18 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Thu, 19 Jul 2018 09:47:18 -0500 Subject: [openstack-dev] [i18n] Edge and Containers whitepapers ready for translation In-Reply-To: <5B4E132E.5050607@openstack.org> References: <5B4CB64F.4060602@openstack.org> <5B4CB93F.6070202@openstack.org> <5B4DF6B4.9030501@openstack.org> <7f053a98-718b-0470-9ffa-934f0e715a76@gmail.com> <5B4E132E.5050607@openstack.org> Message-ID: <5B50A476.8010606@openstack.org> Hi all - Follow up on the Edge paper specifically: https://translate.openstack.org/iteration/view/edge-computing/pot-translation/documents?dswid=-3192 This is now available. As I mentioned on IRC this morning, it should be VERY close to the PDF. Probably just needs a quick review. Let me know if I can assist with anything. Thank you to i18n team for all of your help!!! Cheers, Jimmy Jimmy McArthur wrote: > Ian raises some great points :) I'll try to address below... > > Ian Y. Choi wrote: >> Hello, >> >> When I saw overall translation source strings on container >> whitepaper, I would infer that new edge computing whitepaper >> source strings would include HTML markup tags. > One of the things I discussed with Ian and Frank in Vancouver is the > expense of recreating PDFs with new translations. It's prohibitively > expensive for the Foundation as it requires design resources which we > just don't have. As a result, we created the Containers whitepaper in > HTML, so that it could be easily updated w/o working with outside > design contractors. I indicated that we would also be moving the Edge > paper to HTML so that we could prevent that additional design resource > cost. >> On the other hand, the source strings of edge computing whitepaper >> which I18n team previously translated do not include HTML markup >> tags, since the source strings are based on just text format. > The version that Akihiro put together was based on the Edge PDF, which > we unfortunately didn't have the resources to implement in the same > format. >> >> I really appreciate Akihiro's work on RST-based support on publishing >> translated edge computing whitepapers, since >> translators do not have to re-translate all the strings. > I would like to second this. It took a lot of initiative to work on > the RST-based translation. At the moment, it's just not usable for > the reasons mentioned above. 
>> On the other hand, it seems that I18n team needs to investigate on >> translating similar strings of HTML-based edge computing whitepaper >> source strings, which would discourage translators. > Can you expand on this? I'm not entirely clear on why the HTML based > translation is more difficult. >> >> That's my point of view on translating edge computing whitepaper. >> >> For translating container whitepaper, I want to further ask the >> followings since *I18n-based tools* >> would mean for translators that translators can test and publish >> translated whitepapers locally: >> >> - How to build translated container whitepaper using original >> Silverstripe-based repository? >> https://docs.openstack.org/i18n/latest/tools.html describes well >> how to build translated artifacts for RST-based OpenStack repositories >> but I could not find the way how to build translated container >> whitepaper with translated resources on Zanata. > This is a little tricky. It's possible to set up a local version of > the OpenStack website > (https://github.com/OpenStackweb/openstack-org/blob/master/installation.md). > However, we have to manually ingest the po files as they are completed > and then push them out to production, so that wouldn't do much to help > with your local build. I'm open to suggestions on how we can make > this process easier for the i18n team. > > Thank you, > Jimmy >> >> >> With many thanks, >> >> /Ian >> >> Jimmy McArthur wrote on 7/17/2018 11:01 PM: >>> Frank, >>> >>> I'm sorry to hear about the displeasure around the Edge paper. As >>> mentioned in a prior thread, the RST format that Akihiro worked did >>> not work with the Zanata process that we have been using with our >>> CMS. Additionally, the existing EDGE page is a PDF, so we had to >>> build a new template to work with the new HTML whitepaper layout we >>> created for the Containers paper. I outlined this in the thread " >>> [OpenStack-I18n] [Edge-computing] [Openstack-sigs] Edge Computing >>> Whitepaper Translation" on 6/25/18 and mentioned we would be ready >>> with the template around 7/13. >>> >>> We completed the work on the new whitepaper template and then put >>> out the pot files on Zanata so we can get the po language files >>> back. If this process is too cumbersome for the translation team, >>> I'm open to discussion, but right now our entire translation process >>> is based on the official OpenStack Docs translation process outlined >>> by the i18n team: >>> https://docs.openstack.org/i18n/latest/en_GB/tools.html >>> >>> Again, I realize Akihiro put in some work on his own proposing the >>> new translation type. If the i18n team is moving to this format >>> instead, we can work on redoing our process. >>> >>> Please let me know if I can clarify further. >>> >>> Thanks, >>> Jimmy >>> >>> Frank Kloeker wrote: >>>> Hi Jimmy, >>>> >>>> permission was added for you and Sebastian. The Container >>>> Whitepaper is on the Zanata frontpage now. But we removed Edge >>>> Computing whitepaper last week because there is a kind of >>>> displeasure in the team since the results of translation are still >>>> not published beside Chinese version. It would be nice if we have a >>>> commitment from the Foundation that results are published in a >>>> specific timeframe. This includes your requirements until the >>>> translation should be available. >>>> >>>> thx Frank >>>> >>>> Am 2018-07-16 17:26, schrieb Jimmy McArthur: >>>>> Sorry, I should have also added... 
we additionally need >>>>> permissions so >>>>> that we can add the a new version of the pot file to this project: >>>>> https://translate.openstack.org/project/view/edge-computing/versions?dswid=-7835 >>>>> >>>>> >>>>> Thanks! >>>>> Jimmy >>>>> >>>>> >>>>> >>>>> Jimmy McArthur wrote: >>>>>> Hi all - >>>>>> >>>>>> We have both of the current whitepapers up and available for >>>>>> translation. Can we promote these on the Zanata homepage? >>>>>> >>>>>> https://translate.openstack.org/project/view/leveraging-containers-openstack?dswid=5684 >>>>>> https://translate.openstack.org/iteration/view/edge-computing/master/documents?dswid=5684 >>>>>> Thanks all! >>>>>> Jimmy >>>>> >>>>> >>>>> __________________________________________________________________________ >>>>> >>>>> OpenStack Development Mailing List (not for usage questions) >>>>> Unsubscribe: >>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>> >>> >>> __________________________________________________________________________ >>> >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From cjeanner at redhat.com Thu Jul 19 15:30:27 2018 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Thu, 19 Jul 2018 17:30:27 +0200 Subject: [openstack-dev] Disk space requirement - any way to lower it a little? Message-ID: <6a9fc615-c819-7c6e-244f-9c40054a6b49@redhat.com> Hello, While trying to get a new validation¹ in the undercloud preflight checks, I hit an (not so) unexpected issue with the CI: it doesn't provide flavors with the minimal requirements, at least regarding the disk space. A quick-fix is to disable the validations in the CI - Wes has already pushed a patch for that in the upstream CI: https://review.openstack.org/#/c/583275/ We can consider this as a quick'n'temporary fix². The issue is on the RDO CI: apparently, they provide instances with "only" 55G of free space, making the checks fail: https://logs.rdoproject.org/17/582917/3/openstack-check/legacy-tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-master/c9cf398/logs/undercloud/home/zuul/undercloud_install.log.txt.gz#_2018-07-17_10_23_46 So, the question is: would it be possible to lower the requirment to, let's say, 50G? Where does that 60G³ come from? Thanks for your help/feedback. Cheers, C. ¹ https://review.openstack.org/#/c/582917/ ² as you might know, there's a BP for a unified validation framework, and it will allow to get injected configuration in CI env in order to lower the requirements if necessary: https://blueprints.launchpad.net/tripleo/+spec/validation-framework ³ http://tripleo.org/install/environments/baremetal.html#minimum-system-requirements -- Cédric Jeanneret Software Engineer DFG:DF -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From sean.mcginnis at gmx.com Thu Jul 19 15:42:11 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 19 Jul 2018 10:42:11 -0500 Subject: [openstack-dev] [release] Release countdown for week R-5, July 23-27 Message-ID: <20180719154211.GA6802@sm-workstation> Development Focus ----------------- Teams should be focused on implementing planned work. Work should be wrapping up on client libraries to meet the client lib deadline Thursday, the 26th. General Information ------------------- The final client library release is on Thursday the 26th. Releases will only be allowed for critical fixes in libraries after this point as we stabilize requirements and give time for any unforeseen impacts from lib changes to trickle through. If release critical library or client library releases are needed for Rocky past the freeze dates, you must request a Feature Freeze Exception (FFE) from the requirements team before we can do a new release to avoid having something released in Rocky that is not actually usable. This is done by posting to the openstack-dev mailing list with a subject line similar to: [$PROJECT][requirements] FFE requested for $PROJECT_LIB Include justification/reasoning for why a FFE is needed for this lib. If/when the requirements team OKs the post-freeze update, we can then process a new release. Including a link to the FFE in the release request is not required, but would be helpful in making sure we are clear to do a new release. When requesting these library releases, you should also include the stable branching request with the review (as an example, see the "branches" section here: http://git.openstack.org/cgit/openstack/releases/tree/deliverables/pike/os-brick.yaml#n2) Cycle-trailing projects are reminded that all reviews to the requirements project will have a procedural -2 unless it recieves a FFE until stable/rocky is branched. Upcoming Deadlines & Dates -------------------------- Stein PTL nominations: July 24-31 (pending finalization) Final client library release deadline: July 26 Rocky-3 Milestone: July 26 RC1 deadline: August 9 -- Sean McGinnis (smcginnis) From prometheanfire at gentoo.org Thu Jul 19 15:50:06 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Thu, 19 Jul 2018 10:50:06 -0500 Subject: [openstack-dev] [release] Release countdown for week R-5, July 23-27 In-Reply-To: <20180719154211.GA6802@sm-workstation> References: <20180719154211.GA6802@sm-workstation> Message-ID: <20180719155006.uwkuxzvyrcrele7m@gentoo.org> On 18-07-19 10:42:11, Sean McGinnis wrote: > > Development Focus > ----------------- > > Teams should be focused on implementing planned work. Work should be wrapping > up on client libraries to meet the client lib deadline Thursday, the 26th. > > General Information > ------------------- > > The final client library release is on Thursday the 26th. Releases will only be > allowed for critical fixes in libraries after this point as we stabilize > requirements and give time for any unforeseen impacts from lib changes to > trickle through. > > If release critical library or client library releases are needed for Rocky > past the freeze dates, you must request a Feature Freeze Exception (FFE) from > the requirements team before we can do a new release to avoid having something > released in Rocky that is not actually usable. 
This is done by posting to the > openstack-dev mailing list with a subject line similar to: > > [$PROJECT][requirements] FFE requested for $PROJECT_LIB > > Include justification/reasoning for why a FFE is needed for this lib. If/when > the requirements team OKs the post-freeze update, we can then process a new > release. Including a link to the FFE in the release request is not required, > but would be helpful in making sure we are clear to do a new release. > > When requesting these library releases, you should also include the stable > branching request with the review (as an example, see the "branches" section > here: > > http://git.openstack.org/cgit/openstack/releases/tree/deliverables/pike/os-brick.yaml#n2) > > Cycle-trailing projects are reminded that all reviews to the requirements > project will have a procedural -2 unless it recieves a FFE until stable/rocky > is branched. > > Upcoming Deadlines & Dates > -------------------------- > > Stein PTL nominations: July 24-31 (pending finalization) > Final client library release deadline: July 26 > Rocky-3 Milestone: July 26 > RC1 deadline: August 9 > Projects should also make sure their requirements files are up to date as OpenStack now uses per-project requirements. Further projects should make sure they have a release containing the update. This means that updates to the requirements files falls to the individual projects and not the requirements bot. It is recommended that you have a lower-constraints.txt file and test with it to know when you need to update. See the following example for how to run a basic tox LC job. https://github.com/openstack/oslo.db/blob/master/tox.ini#L76-L81 -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From pkovar at redhat.com Thu Jul 19 15:55:29 2018 From: pkovar at redhat.com (Petr Kovar) Date: Thu, 19 Jul 2018 17:55:29 +0200 Subject: [openstack-dev] [docs][all] Front page template for project team documentation In-Reply-To: <20180629164553.258c79a096fd7a300c31faee@redhat.com> References: <20180629164553.258c79a096fd7a300c31faee@redhat.com> Message-ID: <20180719175529.031fe344e127909028757c06@redhat.com> Hi all, A spin-off discussion in https://review.openstack.org/#/c/579177/ resulted in an idea to update our RST conventions for headings level 2 and 3 so that our guidelines follow recommendations from http://docutils.sourceforge.net/docs/user/rst/quickstart.html#sections. The updated conventions also better reflect what most projects have been using already, regardless of what was previously in our conventions. To sum up, for headings level 2, use dashes: Heading 2 --------- For headings level 3, use tildes: Heading 3 ~~~~~~~~~ For details on the change, see: https://review.openstack.org/#/c/583239/1/doc/doc-contrib-guide/source/rst-conv/titles.rst Thanks, pk On Fri, 29 Jun 2018 16:45:53 +0200 Petr Kovar wrote: > Hi all, > > Feedback from the Queens PTG included requests for the Documentation > Project to provide guidance and recommendations on how to structure common > content typically found on the front page for project team docs, located at > doc/source/index.rst in the project team repository. 
> > I've created a new docs spec, proposing a template to be used by project > teams, and would like to ask the OpenStack community and, specifically, the > project teams, to take a look, submit feedback on the spec, share > comments, ideas, or concerns: > > https://review.openstack.org/#/c/579177/ > > The main goal of providing and using this template is to make it easier for > users to find, navigate, and consume project team documentation, and for > contributors to set up and maintain the project team docs. > > The template would also serve as the basis for one of the future governance > docs tags, which is a long-term plan for the docs team. > > Thank you, > pk > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From Kevin.Fox at pnnl.gov Thu Jul 19 16:11:28 2018 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Thu, 19 Jul 2018 16:11:28 +0000 Subject: [openstack-dev] [tc] [all] TC Report 18-26 In-Reply-To: References: <70afeb87-37c9-1595-ffa4-aadbd1a90228@gmail.com> <542aba82-5c0c-3549-e587-2deded610fe9@gmail.com> <5e835365-2d1a-d388-66b1-88cdf8c9a0fb@redhat.com> <20521a8b-5f58-6ee0-a805-7dc9400b301b@openstack.org> <052e02cf-750e-5755-2494-a2ef4ed73a3d@redhat.com>, Message-ID: <1A3C52DFCD06494D8528644858247BF01C158D7A@EX10MBOX03.pnnl.gov> The primary issue I think is that the Nova folks think there is too much in Nova already. So there are probably more features that can be done to make it more in line with vCenter, and more features to make it more functionally like AWS. And at this point, neither are probably easy to get in. Until Nova changes this stance, they are kind of forcing an either or (or neither), as Nova's position in the OpenStack community currently drives decisions in most of the other OpenStack projects. I'm not laying blame on anyone. They have a hard job to do and not enough people to do it. That forces less then ideal solutions. Not really sure how to resolve this. Deciding "we will support both" is a good first step, but there are other big problems like this that need solving before it can be more then words on a page. Thanks, Kevin ________________________________________ From: Thierry Carrez [thierry at openstack.org] Sent: Thursday, July 19, 2018 5:24 AM To: openstack-dev at lists.openstack.org Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26 Zane Bitter wrote: > [...] >> And I'm not convinced that's an either/or choice... > > I said specifically that it's an either/or/and choice. I was speaking more about the "we need to pick between two approaches, let's document them" that the technical vision exercise started as. Basically I mean I'm missing clear examples of where pursuing AWS would mean breaking vCenter. > So it's not a binary choice but it's very much a ternary choice IMHO. > The middle ground, where each project - or even each individual > contributor within a project - picks an option independently and > proceeds on the implicit assumption that everyone else chose the same > option (although - spoiler alert - they didn't)... that's not a good > place to be. Right, so I think I'm leaning for an "and" choice. Basically OpenStack wants to be an AWS, but ended up being used a lot as a vCenter (for multiple reasons, including the limited success of US-based public cloud offerings in 2011-2016). 
IMHO we should continue to target an AWS, while doing our best to not break those who use it as a vCenter. Would explicitly acknowledging that (we still want to do an AWS, but we need to care about our vCenter users) get us the alignment you seek ? -- Thierry Carrez (ttx) __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From cdent+os at anticdent.org Thu Jul 19 16:33:07 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Thu, 19 Jul 2018 17:33:07 +0100 (BST) Subject: [openstack-dev] [all][api] POST /api-sig/news Message-ID: Greetings OpenStack community, Today's meeting was again very brief as this time elmiko and dtantsur were out. There were no major items of discussion, but we made plans to check on the status of the GraphQL prototyping (Hi! How's it going?). In addition to the light discussion there was also one guideline that was frozen for wider review and a new one introduced (see below). Both are realted to the handling of the "code" attribute in error responses. As always if you're interested in helping out, in addition to coming to the meetings, there's also: * The list of bugs [5] indicates several missing or incomplete guidelines. * The existing guidelines [2] always need refreshing to account for changes over time. If you find something that's not quite right, submit a patch [6] to fix it. * Have you done something for which you think guidance would have made things easier but couldn't find any? Submit a patch and help others [6]. # Newly Published Guidelines * Expand error code document to expect clarity https://review.openstack.org/#/c/577118/ # API Guidelines Proposed for Freeze Guidelines that are ready for wider review by the whole community. * Add links to errors-example.json https://review.openstack.org/#/c/578369/ # Guidelines Currently Under Review [3] * Expand schema for error.codes to reflect reality https://review.openstack.org/#/c/580703/ * Update parameter names in microversion sdk spec https://review.openstack.org/#/c/557773/ * Add API-schema guide (still being defined) https://review.openstack.org/#/c/524467/ * A (shrinking) suite of several documents about doing version and service discovery Start at https://review.openstack.org/#/c/459405/ * WIP: microversion architecture archival doc (very early; not yet ready for review) https://review.openstack.org/444892 # Highlighting your API impacting issues If you seek further review and insight from the API SIG about APIs that you are developing or changing, please address your concerns in an email to the OpenStack developer mailing list[1] with the tag "[api]" in the subject. In your email, you should include any relevant reviews, links, and comments to help guide the discussion of the specific challenge you are facing. To learn more about the API SIG mission and the work we do, see our wiki page [4] and guidelines [2]. Thanks for reading and see you next week! 
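(For readers who have not looked at the error guidelines mentioned above: the payload they describe looks roughly like the sketch below. This is recalled from memory, so treat the exact field names as approximate and check the errors-example.json referenced in the reviews.)

    # Hedged sketch of an error response following the API SIG guidance;
    # the "code" attribute is the part the two reviews above are refining.
    error_response = {
        "errors": [
            {
                "request_id": "req-c0f24c1a-0000-0000-0000-000000000000",
                "code": "orchestration.stack-not-found",  # made-up example code
                "status": 404,
                "title": "Stack not found",
                "detail": "The requested stack could not be found.",
                "links": [
                    {"rel": "help",
                     "href": "https://example.com/errors/stack-not-found"},
                ],
            }
        ]
    }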
# References [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [2] http://specs.openstack.org/openstack/api-wg/ [3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z [4] https://wiki.openstack.org/wiki/API_SIG [5] https://bugs.launchpad.net/openstack-api-wg [6] https://git.openstack.org/cgit/openstack/api-wg Meeting Agenda https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda Past Meeting Records http://eavesdrop.openstack.org/meetings/api_sig/ Open Bugs https://bugs.launchpad.net/openstack-api-wg -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From alifshit at redhat.com Thu Jul 19 16:50:18 2018 From: alifshit at redhat.com (Artom Lifshitz) Date: Thu, 19 Jul 2018 12:50:18 -0400 Subject: [openstack-dev] [Nova][Cinder][Tempest] Help with tempest.api.compute.servers.test_device_tagging.TaggedAttachmentsTest.test_tagged_attachment needed In-Reply-To: <6AEB5700-BBCA-46C3-9A48-83EC7CC92475@redhat.com> References: <6AEB5700-BBCA-46C3-9A48-83EC7CC92475@redhat.com> Message-ID: Because we're waiting for the volume to become available before we continue with the test [1], its tag still being present means Nova's not cleaning up the device tags on volume detach. This is most likely a bug. I'll look into it. [1] https://github.com/openstack/tempest/blob/master/tempest/api/compute/servers/test_device_tagging.py#L378 On Thu, Jul 19, 2018 at 7:09 AM, Slawomir Kaplonski wrote: > Hi, > > Since some time we see that test tempest.api.compute.servers.test_device_tagging.TaggedAttachmentsTest.test_tagged_attachment is failing sometimes. > Bug about that is reported for Tempest currently [1] but after small patch [2] was merged I was today able to check what cause this issue. > > Test which is failing is in [3] and it looks that everything is going fine with it up to last line of test. So volume and port are created, attached, tags are set properly, both devices are detached properly also and at the end test is failing as in http://169.254.169.254/openstack/latest/meta_data.json still has some device inside. > And it looks now from [4] that it is volume which isn’t removed from this meta_data.json. > So I think that it would be good if people from Nova and Cinder teams could look at it and try to figure out what is going on there and how it can be fixed. > > Thanks in advance for help. > > [1] https://bugs.launchpad.net/tempest/+bug/1775947 > [2] https://review.openstack.org/#/c/578765/ > [3] https://github.com/openstack/tempest/blob/master/tempest/api/compute/servers/test_device_tagging.py#L330 > [4] http://logs.openstack.org/69/567369/15/check/tempest-full/528bc75/job-output.txt.gz#_2018-07-19_10_06_09_273919 > > — > Slawek Kaplonski > Senior software engineer > Red Hat > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- -- Artom Lifshitz Software Engineer, OpenStack Compute DFG From johnsomor at gmail.com Thu Jul 19 16:53:44 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Thu, 19 Jul 2018 09:53:44 -0700 Subject: [openstack-dev] [octavia] Make amphora-agent support http rest api In-Reply-To: References: Message-ID: I saw your storyboard for this. Thank you for creating a story. 
Since the controllers manage the certificates for the amphora (both generation and rotation) the overhead to an operator should be extremely low and limited to initial installation configuration. Since we have automated the certificate handling we felt it was better to only allow TLS connections for the management traffice to the amphora. Please feel free to discuss on the Storyboard story, Michael On Wed, Jul 18, 2018 at 7:52 PM Jeff Yang wrote: > > In some private cloud environments, the possibility of vm being attacked is very small, and all personnel are trusted. At this time, the administrator hopes to reduce the complexity of octavia deployment and operation and maintenance. We can let the amphora-agent provide the http api so that the administrator can ignore the issue of the certificate. > https://storyboard.openstack.org/#!/story/2003027 > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From pabelanger at redhat.com Thu Jul 19 16:55:04 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Thu, 19 Jul 2018 12:55:04 -0400 Subject: [openstack-dev] Disk space requirement - any way to lower it a little? In-Reply-To: <6a9fc615-c819-7c6e-244f-9c40054a6b49@redhat.com> References: <6a9fc615-c819-7c6e-244f-9c40054a6b49@redhat.com> Message-ID: <20180719165504.GA9267@localhost.localdomain> On Thu, Jul 19, 2018 at 05:30:27PM +0200, Cédric Jeanneret wrote: > Hello, > > While trying to get a new validation¹ in the undercloud preflight > checks, I hit an (not so) unexpected issue with the CI: > it doesn't provide flavors with the minimal requirements, at least > regarding the disk space. > > A quick-fix is to disable the validations in the CI - Wes has already > pushed a patch for that in the upstream CI: > https://review.openstack.org/#/c/583275/ > We can consider this as a quick'n'temporary fix². > > The issue is on the RDO CI: apparently, they provide instances with > "only" 55G of free space, making the checks fail: > https://logs.rdoproject.org/17/582917/3/openstack-check/legacy-tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-master/c9cf398/logs/undercloud/home/zuul/undercloud_install.log.txt.gz#_2018-07-17_10_23_46 > > So, the question is: would it be possible to lower the requirment to, > let's say, 50G? Where does that 60G³ come from? > > Thanks for your help/feedback. > > Cheers, > > C. > > > > ¹ https://review.openstack.org/#/c/582917/ > > ² as you might know, there's a BP for a unified validation framework, > and it will allow to get injected configuration in CI env in order to > lower the requirements if necessary: > https://blueprints.launchpad.net/tripleo/+spec/validation-framework > > ³ > http://tripleo.org/install/environments/baremetal.html#minimum-system-requirements > Keep in mind, upstream we don't really have control over partitions of nodes, in some case it is a single, other multiple. I'd suggest looking more at: https://docs.openstack.org/infra/manual/testing.html As for downstream RDO, the same is going to apply once we start adding more cloud providers. I would look to see if you actually need that much space for deployments, and make try to mock the testing of that logic. 
- Paul From openstack at nemebean.com Thu Jul 19 17:14:35 2018 From: openstack at nemebean.com (Ben Nemec) Date: Thu, 19 Jul 2018 12:14:35 -0500 Subject: [openstack-dev] Disk space requirement - any way to lower it a little? In-Reply-To: <20180719165504.GA9267@localhost.localdomain> References: <6a9fc615-c819-7c6e-244f-9c40054a6b49@redhat.com> <20180719165504.GA9267@localhost.localdomain> Message-ID: <0c0ab130-8021-30b4-1d12-006754d12ef3@nemebean.com> On 07/19/2018 11:55 AM, Paul Belanger wrote: > On Thu, Jul 19, 2018 at 05:30:27PM +0200, Cédric Jeanneret wrote: >> Hello, >> >> While trying to get a new validation¹ in the undercloud preflight >> checks, I hit an (not so) unexpected issue with the CI: >> it doesn't provide flavors with the minimal requirements, at least >> regarding the disk space. >> >> A quick-fix is to disable the validations in the CI - Wes has already >> pushed a patch for that in the upstream CI: >> https://review.openstack.org/#/c/583275/ >> We can consider this as a quick'n'temporary fix². >> >> The issue is on the RDO CI: apparently, they provide instances with >> "only" 55G of free space, making the checks fail: >> https://logs.rdoproject.org/17/582917/3/openstack-check/legacy-tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-master/c9cf398/logs/undercloud/home/zuul/undercloud_install.log.txt.gz#_2018-07-17_10_23_46 >> >> So, the question is: would it be possible to lower the requirment to, >> let's say, 50G? Where does that 60G³ come from? >> >> Thanks for your help/feedback. >> >> Cheers, >> >> C. >> >> >> >> ¹ https://review.openstack.org/#/c/582917/ >> >> ² as you might know, there's a BP for a unified validation framework, >> and it will allow to get injected configuration in CI env in order to >> lower the requirements if necessary: >> https://blueprints.launchpad.net/tripleo/+spec/validation-framework >> >> ³ >> http://tripleo.org/install/environments/baremetal.html#minimum-system-requirements >> > Keep in mind, upstream we don't really have control over partitions of nodes, in > some case it is a single, other multiple. I'd suggest looking more at: > > https://docs.openstack.org/infra/manual/testing.html And this isn't just a testing thing. As I mentioned in the previous thread, real-world users often use separate partitions for some data (logs, for example). Looking at the existing validation[1] I don't know that it would handle multiple partitions sufficiently well to turn it on by default. It's only checking /var and /, and I've seen much more complex partition layouts than that. 1: https://github.com/openstack/tripleo-validations/blob/master/validations/tasks/disk_space.yaml > > As for downstream RDO, the same is going to apply once we start adding more > cloud providers. I would look to see if you actually need that much space for > deployments, and make try to mock the testing of that logic. It's also worth noting that what we can get away with in ci is not necessarily appropriate for production. Being able to run a short-lived, single-use deployment in 50 GB doesn't mean that you could realistically run that on a long-lived production cloud. Log and database storage tends to increase over time. There should be a ceiling to how large that all grows if rotation and db cleanup is configured correctly, but that ceiling is much higher than anything ci is ever going to hit. Anecdotally, I bumped my development flavor disk space to >50 GB because I ran out of space when I built containers locally. 
I don't know if that's something we expect users to be doing, but it is definitely possible to exhaust 50 GB in a short period of time. From alifshit at redhat.com Thu Jul 19 17:28:39 2018 From: alifshit at redhat.com (Artom Lifshitz) Date: Thu, 19 Jul 2018 13:28:39 -0400 Subject: [openstack-dev] [Nova][Cinder][Tempest] Help with tempest.api.compute.servers.test_device_tagging.TaggedAttachmentsTest.test_tagged_attachment needed In-Reply-To: References: <6AEB5700-BBCA-46C3-9A48-83EC7CC92475@redhat.com> Message-ID: I've proposed [1] to add extra logging on the Nova side. Let's see if that helps us catch the root cause of this. [1] https://review.openstack.org/584032 On Thu, Jul 19, 2018 at 12:50 PM, Artom Lifshitz wrote: > Because we're waiting for the volume to become available before we > continue with the test [1], its tag still being present means Nova's > not cleaning up the device tags on volume detach. This is most likely > a bug. I'll look into it. > > [1] https://github.com/openstack/tempest/blob/master/tempest/api/compute/servers/test_device_tagging.py#L378 > > On Thu, Jul 19, 2018 at 7:09 AM, Slawomir Kaplonski wrote: >> Hi, >> >> Since some time we see that test tempest.api.compute.servers.test_device_tagging.TaggedAttachmentsTest.test_tagged_attachment is failing sometimes. >> Bug about that is reported for Tempest currently [1] but after small patch [2] was merged I was today able to check what cause this issue. >> >> Test which is failing is in [3] and it looks that everything is going fine with it up to last line of test. So volume and port are created, attached, tags are set properly, both devices are detached properly also and at the end test is failing as in http://169.254.169.254/openstack/latest/meta_data.json still has some device inside. >> And it looks now from [4] that it is volume which isn’t removed from this meta_data.json. >> So I think that it would be good if people from Nova and Cinder teams could look at it and try to figure out what is going on there and how it can be fixed. >> >> Thanks in advance for help. >> >> [1] https://bugs.launchpad.net/tempest/+bug/1775947 >> [2] https://review.openstack.org/#/c/578765/ >> [3] https://github.com/openstack/tempest/blob/master/tempest/api/compute/servers/test_device_tagging.py#L330 >> [4] http://logs.openstack.org/69/567369/15/check/tempest-full/528bc75/job-output.txt.gz#_2018-07-19_10_06_09_273919 >> >> — >> Slawek Kaplonski >> Senior software engineer >> Red Hat >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > -- > -- > Artom Lifshitz > Software Engineer, OpenStack Compute DFG -- -- Artom Lifshitz Software Engineer, OpenStack Compute DFG From emilien at redhat.com Thu Jul 19 20:37:58 2018 From: emilien at redhat.com (Emilien Macchi) Date: Thu, 19 Jul 2018 16:37:58 -0400 Subject: [openstack-dev] [tripleo] prototype with standalone mode and remote edge compute nodes Message-ID: Today I played a little bit with Standalone deployment [1] to deploy a single OpenStack cloud without the need of an undercloud and overcloud. The use-case I am testing is the following: "As an operator, I want to deploy a single node OpenStack, that I can extend with remote compute nodes on the edge when needed." 
We still have a bunch of things to figure out so it works out of the box, but so far I was able to build something that worked, and I found useful to share it early to gather some feedback: https://gitlab.com/emacchi/tripleo-standalone-edge Keep in mind this is a proof of concept, based on upstream documentation and re-using 100% what is in TripleO today. The only thing I'm doing is to change the environment and the roles for the remote compute node. I plan to work on cleaning the manual steps that I had to do to make it working, like hardcoding some hiera parameters and figure out how to override ServiceNetmap. Anyway, feel free to test / ask questions / provide feedback. Thanks, [1] https://docs.openstack.org/tripleo-docs/latest/install/containers_deployment/standalone.html -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Thu Jul 19 23:13:46 2018 From: openstack at nemebean.com (Ben Nemec) Date: Thu, 19 Jul 2018 18:13:46 -0500 Subject: [openstack-dev] [tripleo] prototype with standalone mode and remote edge compute nodes In-Reply-To: References: Message-ID: <01b99bed-21a7-160f-5186-094aacd5760b@nemebean.com> On 07/19/2018 03:37 PM, Emilien Macchi wrote: > Today I played a little bit with Standalone deployment [1] to deploy a > single OpenStack cloud without the need of an undercloud and overcloud. > The use-case I am testing is the following: > "As an operator, I want to deploy a single node OpenStack, that I can > extend with remote compute nodes on the edge when needed." > > We still have a bunch of things to figure out so it works out of the > box, but so far I was able to build something that worked, and I found > useful to share it early to gather some feedback: > https://gitlab.com/emacchi/tripleo-standalone-edge > > Keep in mind this is a proof of concept, based on upstream documentation > and re-using 100% what is in TripleO today. The only thing I'm doing is > to change the environment and the roles for the remote compute node. > I plan to work on cleaning the manual steps that I had to do to make it > working, like hardcoding some hiera parameters and figure out how to > override ServiceNetmap. > > Anyway, feel free to test / ask questions / provide feedback. What is the benefit of doing this over just using deployed server to install a remote server from the central management system? You need to have connectivity back to the central location anyway. Won't this become unwieldy with a large number of edge nodes? I thought we told people not to use Packstack for multi-node deployments for exactly that reason. I guess my concern is that eliminating the undercloud makes sense for single-node PoC's and development work, but for what sounds like a production workload I feel like you're cutting off your nose to spite your face. In the interest of saving one VM's worth of resources, now all of your day 2 operations have no built-in orchestration. Every time you want to change a configuration it's "copy new script to system, ssh to system, run script, repeat for all systems. So maybe this is a backdoor way to make Ansible our API? ;-) From y.furukawa_2 at jp.fujitsu.com Fri Jul 20 02:05:04 2018 From: y.furukawa_2 at jp.fujitsu.com (Furukawa, Yushiro) Date: Fri, 20 Jul 2018 02:05:04 +0000 Subject: [openstack-dev] [neutron] [neutron-fwaas] Feature Freeze for logging feature Message-ID: Hi Miguel, I'd like to ask Feature Freeze Exeption regarding FWaaS v2 logging. Following patches are under reviewing now. 
So, could you please add these patches into FFE? 01. openstack/neutron-fwaas https://review.openstack.org/#/c/530694 02. openstack/neutron-fwaas https://review.openstack.org/#/c/553738 03. openstack/neutron-fwaas https://review.openstack.org/#/c/580976 04. openstack/neutron-fwaas https://review.openstack.org/#/c/574128 05. openstack/neutron-fwaas https://review.openstack.org/#/c/532792 06. openstack/neutron-fwaas https://review.openstack.org/#/c/576338 07. openstack/neutron-fwaas https://review.openstack.org/#/c/530715 08. openstack/neutron-fwaas https://review.openstack.org/#/c/578718 09. openstack/neutron https://review.openstack.org/#/c/534227 10. openstack/neutron https://review.openstack.org/#/c/529814 11. openstack/neutron https://review.openstack.org/#/c/580575 12. openstack/neutron https://review.openstack.org/#/c/582498 We're focusing on reviewing/testing these patches now. In addition, please take a look at the 4 neutron patches :) That would be very helpful for us. Best regards, -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony at bakeyournoodle.com Fri Jul 20 03:05:26 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Fri, 20 Jul 2018 13:05:26 +1000 Subject: [openstack-dev] [stable][meta] Proposing Retiring the Stable Branch project team and Opening the Extended Maintenance SIG Message-ID: <20180720030525.GC30070@thor.bakeyournoodle.com> Hello folks, So really the subject says it all. I feel like at the time we created the Stable branch project team that was the only option. Since then we have created the SIG structure and in my opinion that's a better fit. We've also transitioned from 'Stable Branch Maintenance' to 'Extended Maintenance'. Being a SIG will make it explicit that we *need* operator, user and developer contributions. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From bdobreli at redhat.com Fri Jul 20 06:18:54 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Fri, 20 Jul 2018 09:18:54 +0300 Subject: [openstack-dev] [tripleo] prototype with standalone mode and remote edge compute nodes In-Reply-To: <01b99bed-21a7-160f-5186-094aacd5760b@nemebean.com> References: <01b99bed-21a7-160f-5186-094aacd5760b@nemebean.com> Message-ID: On 7/20/18 2:13 AM, Ben Nemec wrote: > > > On 07/19/2018 03:37 PM, Emilien Macchi wrote: >> Today I played a little bit with Standalone deployment [1] to deploy a >> single OpenStack cloud without the need of an undercloud and overcloud. >> The use-case I am testing is the following: >> "As an operator, I want to deploy a single node OpenStack, that I can >> extend with remote compute nodes on the edge when needed." >> >> We still have a bunch of things to figure out so it works out of the >> box, but so far I was able to build something that worked, and I found >> useful to share it early to gather some feedback: >> https://gitlab.com/emacchi/tripleo-standalone-edge >> >> Keep in mind this is a proof of concept, based on upstream >> documentation and re-using 100% what is in TripleO today. The only >> thing I'm doing is to change the environment and the roles for the >> remote compute node. >> I plan to work on cleaning the manual steps that I had to do to make >> it working, like hardcoding some hiera parameters and figure out how >> to override ServiceNetmap. >> >> Anyway, feel free to test / ask questions / provide feedback. > > What is the benefit of doing this over just using deployed server to > install a remote server from the central management system?  You need to > have connectivity back to the central location anyway.  Won't this > become unwieldy with a large number of edge nodes?  I thought we told > people not to use Packstack for multi-node deployments for exactly that > reason. > > I guess my concern is that eliminating the undercloud makes sense for > single-node PoC's and development work, but for what sounds like a > production workload I feel like you're cutting off your nose to spite > your face.  In the interest of saving one VM's worth of resources, now > all of your day 2 operations have no built-in orchestration.  Every time > you want to change a configuration it's "copy new script to system, ssh > to system, run script, repeat for all systems.  So maybe this is a > backdoor way to make Ansible our API? ;-) Ansible may orchestrate that for day 2. Deploying Heat stacks is already made ephemeral for standalone/underclouds so only thing you'll need for day 2 is ansible really. Hence, the need of undercloud shrinks into having an ansible control node, like your laptop, to control all clouds via inventory. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Best regards, Bogdan Dobrelya, Irc #bogdando From cjeanner at redhat.com Fri Jul 20 07:49:27 2018 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Fri, 20 Jul 2018 09:49:27 +0200 Subject: [openstack-dev] Disk space requirement - any way to lower it a little? 
In-Reply-To: <20180719165504.GA9267@localhost.localdomain> References: <6a9fc615-c819-7c6e-244f-9c40054a6b49@redhat.com> <20180719165504.GA9267@localhost.localdomain> Message-ID: <4e779c22-9d09-b298-c292-e66ab1718df2@redhat.com> On 07/19/2018 06:55 PM, Paul Belanger wrote: > On Thu, Jul 19, 2018 at 05:30:27PM +0200, Cédric Jeanneret wrote: >> Hello, >> >> While trying to get a new validation¹ in the undercloud preflight >> checks, I hit an (not so) unexpected issue with the CI: >> it doesn't provide flavors with the minimal requirements, at least >> regarding the disk space. >> >> A quick-fix is to disable the validations in the CI - Wes has already >> pushed a patch for that in the upstream CI: >> https://review.openstack.org/#/c/583275/ >> We can consider this as a quick'n'temporary fix². >> >> The issue is on the RDO CI: apparently, they provide instances with >> "only" 55G of free space, making the checks fail: >> https://logs.rdoproject.org/17/582917/3/openstack-check/legacy-tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-master/c9cf398/logs/undercloud/home/zuul/undercloud_install.log.txt.gz#_2018-07-17_10_23_46 >> >> So, the question is: would it be possible to lower the requirment to, >> let's say, 50G? Where does that 60G³ come from? >> >> Thanks for your help/feedback. >> >> Cheers, >> >> C. >> >> >> >> ¹ https://review.openstack.org/#/c/582917/ >> >> ² as you might know, there's a BP for a unified validation framework, >> and it will allow to get injected configuration in CI env in order to >> lower the requirements if necessary: >> https://blueprints.launchpad.net/tripleo/+spec/validation-framework >> >> ³ >> http://tripleo.org/install/environments/baremetal.html#minimum-system-requirements >> > Keep in mind, upstream we don't really have control over partitions of nodes, in > some case it is a single, other multiple. I'd suggest looking more at: After some checks on y locally deployed containerized undercloud (hence, Rocky) without real activity, here's what I could get: - most data are located in /var - this explains the current check. If we go a bit deeper, here are the "actually used" directory in /var/lib: 20K alternatives 36K certmonger 4.0K chrony 1.2G config-data 4.0K dhclient 6.0G docker 28K docker-config-scripts 92K docker-container-startup-configs.json 44K docker-puppet 592K heat-config 832K ironic 4.0K ironic-inspector 236K kolla 4.0K logrotate 286M mysql 48K neutron 4.0K ntp 4.0K postfix 872K puppet 3.8M rabbitmq 59M rpm 4.0K rsyslog 64K systemd 20K tripleo 236K tripleo-config 9.8M yum 7.5G total Most of the "default installer" partition schema don't go further than putting /var, /tmp, /home and /usr in dedicated volumes - of course, end-user can chose to ignore that and provide a custom schema. That said, we can get the "used" paths. In addition to /var/lib, there's obviously /usr. We might want to: - loop on known locations - check if they are on dedicated mount points - check the available disk space on those mount points. An interesting thing in bash: df /var/lib/docker Filesystem 1K-blocks Used Available Use% Mounted on /dev/sda1 104846316 10188828 94657488 10% / This allows to get: - actual volume - free space on the volume. More than that, we might also try to figure out some pattern. For instance, "docker" seems to be a pretty good candidate for space, as it will get the images and container data. This is probably even the biggest eater, at least on the undercloud - as well as the logs (/var/logs). 
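To illustrate, the whole per-location check could be something as simple as the sketch below (illustrative only - not the tripleo-validations code - with placeholder locations/sizes that CI could later override through the validation framework):

    # Rough sketch: map "known locations" to a minimum size, aggregate
    # the requirements per mount point, then compare with the free space
    # actually available on that mount point.
    import os

    REQUIRED_GB = {
        "/var/lib/docker": 10,
        "/var/lib/config-data": 5,
        "/usr": 5,
        "/var/log": 1,
    }

    def mount_point(path):
        # walk up until we reach the mount point holding the path
        path = os.path.realpath(path)
        while not os.path.ismount(path):
            path = os.path.dirname(path)
        return path

    needed = {}
    for location, size_gb in REQUIRED_GB.items():
        mnt = mount_point(location)
        needed[mnt] = needed.get(mnt, 0) + size_gb

    for mnt, size_gb in sorted(needed.items()):
        stats = os.statvfs(mnt)
        free_gb = stats.f_bavail * stats.f_frsize / 1024.0 ** 3
        state = "OK" if free_gb >= size_gb else "FAILED"
        print("%s: %s has %.1fG free, needs %dG" % (state, mnt, free_gb, size_gb))

With something like that, a single 55G (or even smaller) flavor passes as long as the few locations we actually write to have enough room, and boxes with dedicated /var, /var/log or /var/lib/docker volumes get checked where it matters.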
We might do a check ensuring we can, at least, DEPLOY the app. This would require far less than the required 60G, and with a proper doc announcing that, we can get a functional test, aiming on its purpose: ensure we can deploy (so asking, let's say, 10G in /var/lib/docker, 5G in /var/lib/config-data, 5G in /usr, 1G in /var/log) and, later, upgrade (requiring the same amount of *free* space). That would require some changes in the validation check of course. But at least, we might get a pretty nice covering, while allowing it to run smoothly in the CI. But, as said: proper documentation should be set, and the "60G minimum required" should be rephrased in order to point the locations needing space (with the appropriate warning about "none exhaustiveness" and the like). Would that suit better the actual needs, and allow to get a proper disk space check/validation? Cheers, C. > > https://docs.openstack.org/infra/manual/testing.html > > As for downstream RDO, the same is going to apply once we start adding more > cloud providers. I would look to see if you actually need that much space for > deployments, and make try to mock the testing of that logic. > > - Paul > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Cédric Jeanneret Software Engineer DFG:DF -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From ccamacho at redhat.com Fri Jul 20 08:07:24 2018 From: ccamacho at redhat.com (Carlos Camacho Gonzalez) Date: Fri, 20 Jul 2018 10:07:24 +0200 Subject: [openstack-dev] [tripleo] Proposing Jose Luis Franco for TripleO core reviewer on Upgrade bits Message-ID: Hi!!! I'll like to propose Jose Luis Franco [1][2] for core reviewer in all the TripleO upgrades bits. He shows a constant and active involvement in improving and fixing our updates/upgrades workflows, he helps also trying to develop/improve/fix our upstream support for testing the updates/upgrades. Please vote -1/+1, and consider this my +1 vote :) [1]: https://review.openstack.org/#/q/owner:jfrancoa%2540redhat.com [2]: http://stackalytics.com/?release=all&metric=commits&user_id=jfrancoa Cheers, Carlos. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bdobreli at redhat.com Fri Jul 20 08:10:28 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Fri, 20 Jul 2018 11:10:28 +0300 Subject: [openstack-dev] [tripleo] Proposing Jose Luis Franco for TripleO core reviewer on Upgrade bits In-Reply-To: References: Message-ID: On 7/20/18 11:07 AM, Carlos Camacho Gonzalez wrote: > Hi!!! > > I'll like to propose Jose Luis Franco [1][2] for core reviewer in all > the TripleO upgrades bits. He shows a constant and active involvement in > improving and fixing our updates/upgrades workflows, he helps also > trying to develop/improve/fix our upstream support for testing the > updates/upgrades. > > Please vote -1/+1, and consider this my +1 vote :) +1! > > [1]: https://review.openstack.org/#/q/owner:jfrancoa%2540redhat.com > [2]: http://stackalytics.com/?release=all&metric=commits&user_id=jfrancoa > > Cheers, > Carlos. 
> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Best regards, Bogdan Dobrelya, Irc #bogdando From marios at redhat.com Fri Jul 20 08:12:28 2018 From: marios at redhat.com (Marios Andreou) Date: Fri, 20 Jul 2018 11:12:28 +0300 Subject: [openstack-dev] [tripleo] Proposing Jose Luis Franco for TripleO core reviewer on Upgrade bits In-Reply-To: References: Message-ID: On Fri, Jul 20, 2018 at 11:07 AM, Carlos Camacho Gonzalez < ccamacho at redhat.com> wrote: > Hi!!! > > I'll like to propose Jose Luis Franco [1][2] for core reviewer in all the > TripleO upgrades bits. He shows a constant and active involvement in > improving and fixing our updates/upgrades workflows, he helps also trying > to develop/improve/fix our upstream support for testing the > updates/upgrades. > > Please vote -1/+1, and consider this my +1 vote :) > +1 of course! > > > [1]: https://review.openstack.org/#/q/owner:jfrancoa%2540redhat.com > [2]: http://stackalytics.com/?release=all&metric=commits&user_id=jfrancoa > > Cheers, > Carlos. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gfidente at redhat.com Fri Jul 20 08:20:44 2018 From: gfidente at redhat.com (Giulio Fidente) Date: Fri, 20 Jul 2018 10:20:44 +0200 Subject: [openstack-dev] [tripleo] prototype with standalone mode and remote edge compute nodes In-Reply-To: References: Message-ID: <79fdd12b-e0e1-6c14-ce22-897960ae40f2@redhat.com> On 07/19/2018 10:37 PM, Emilien Macchi wrote: > Today I played a little bit with Standalone deployment [1] to deploy a > single OpenStack cloud without the need of an undercloud and overcloud. > The use-case I am testing is the following: > "As an operator, I want to deploy a single node OpenStack, that I can > extend with remote compute nodes on the edge when needed." > > We still have a bunch of things to figure out so it works out of the > box, but so far I was able to build something that worked, and I found > useful to share it early to gather some feedback: >   https://gitlab.com/emacchi/tripleo-standalone-edge > > Keep in mind this is a proof of concept, based on upstream documentation > and re-using 100% what is in TripleO today. The only thing I'm doing is > to change the environment and the roles for the remote compute node. > I plan to work on cleaning the manual steps that I had to do to make it > working, like hardcoding some hiera parameters and figure out how to > override ServiceNetmap. > > Anyway, feel free to test / ask questions / provide feedback. hi Emilien, thanks for sharing this. I have started experimenting with edge deployments to help out on the split-controlplane spec [1], which Steven started addressing. I was able to deploy multiple stacks and isolated Ceph clusters; there are some bits missing to provision a working configuration for nova-compute to the edge services, but we could probably collect/export the necessary outputs from the parent stack (eg. rabbit connection infos) and feed the edge stacks with those.
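Just to illustrate the kind of flow being described - the stack, output and file names below are placeholders, and which outputs actually need exporting depends on the parent stack's templates:

  # list what the parent (controlplane) stack already exposes
  openstack stack output list overcloud
  # export the bits the edge roles need (endpoints, messaging/memcache details, ...)
  openstack stack output show -f yaml overcloud EndpointMap > central-endpoints.yaml
  # turn that into a parameter_defaults environment and feed it to the edge stack
  openstack overcloud deploy --stack edge-site-a --templates \
    -r edge_roles.yaml -e central-endpoints-env.yaml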
A much bigger challenge seems to me that for some services (eg. glance or cinder) we need to "refresh" the configuration of the controlplane nodes to push the details of the newly deployed ceph clusters (backends) of the edge nodes as backends for the controlplane services. Alternatively, we could opt for the deployment of cinder-volume instances on the edge nodes, but we would still have the same problem for glance and possibly other services. I'd like to discuss further this topic at the PTG to gether more feedback so I added a bullet to the pad with the Stein PTG topics [2]. 1. https://blueprints.launchpad.net/tripleo/+spec/split-controlplane 2. https://etherpad.openstack.org/p/tripleo-ptg-stein -- Giulio Fidente GPG KEY: 08D733BA From yprokule at redhat.com Fri Jul 20 08:23:20 2018 From: yprokule at redhat.com (Yurii Prokulevych) Date: Fri, 20 Jul 2018 10:23:20 +0200 Subject: [openstack-dev] [tripleo] Proposing Jose Luis Franco for TripleO core reviewer on Upgrade bits In-Reply-To: References: Message-ID: <1532075000.3477.0.camel@redhat.com> On Fri, 2018-07-20 at 10:07 +0200, Carlos Camacho Gonzalez wrote: > Hi!!! > > I'll like to propose Jose Luis Franco [1][2] for core reviewer in all > the TripleO upgrades bits. He shows a constant and active involvement > in improving and fixing our updates/upgrades workflows, he helps also > trying to develop/improve/fix our upstream support for testing the > updates/upgrades. > > Please vote -1/+1, and consider this my +1 vote :) +1 > > [1]: https://review.openstack.org/#/q/owner:jfrancoa%2540redhat.com > [2]: http://stackalytics.com/?release=all&metric=commits&user_id=jfra > ncoa > > Cheers, > Carlos. > _____________________________________________________________________ > _____ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubs > cribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From thierry at openstack.org Fri Jul 20 08:41:24 2018 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 20 Jul 2018 10:41:24 +0200 Subject: [openstack-dev] [stable][meta] Proposing Retiring the Stable Branch project In-Reply-To: <20180720030800.GD30070@thor.bakeyournoodle.com> References: <20180720030525.GC30070@thor.bakeyournoodle.com> <20180720030800.GD30070@thor.bakeyournoodle.com> Message-ID: <0dfa5950-a3f1-e8f8-66e6-94d462c5d823@openstack.org> Tony Breeds wrote: > On Fri, Jul 20, 2018 at 01:05:26PM +1000, Tony Breeds wrote: >> >> Hello folks, >> So really the subject says it all. I fell like at the time we >> created the Stable branch project team that was the only option. Since >> then we have crated the SIG structure and in my opinion that's a better >> fit. We've also transition from 'Stable Branch Maintenance' to >> 'Extended Maintenance' >> >> Being a SIG will make it explicit that we *need* operator, user and >> developer contributions. > > I meant to say I've created: > https://review.openstack.org/584205 and > https://review.openstack.org/584206 > > To make this transition. I think it makes a lot of sense. Stable branch maintenance was always a bit of an odd duck in the project teams (owning no repository), and is technically a downstream activity (post-release) with lots of potential to get users involved. 
-- Thierry Carrez (ttx) From emilien at redhat.com Fri Jul 20 11:00:51 2018 From: emilien at redhat.com (Emilien Macchi) Date: Fri, 20 Jul 2018 07:00:51 -0400 Subject: [openstack-dev] [tripleo] Proposing Jose Luis Franco for TripleO core reviewer on Upgrade bits In-Reply-To: References: Message-ID: On Fri, Jul 20, 2018 at 4:09 AM Carlos Camacho Gonzalez wrote: > Hi!!! > > I'll like to propose Jose Luis Franco [1][2] for core reviewer in all the > TripleO upgrades bits. He shows a constant and active involvement in > improving and fixing our updates/upgrades workflows, he helps also trying > to develop/improve/fix our upstream support for testing the > updates/upgrades. > > Please vote -1/+1, and consider this my +1 vote :) > Nice work indeed, +1. Keep doing a good job and thanks for all your help! -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Fri Jul 20 11:09:33 2018 From: emilien at redhat.com (Emilien Macchi) Date: Fri, 20 Jul 2018 07:09:33 -0400 Subject: [openstack-dev] [tripleo] prototype with standalone mode and remote edge compute nodes In-Reply-To: <79fdd12b-e0e1-6c14-ce22-897960ae40f2@redhat.com> References: <79fdd12b-e0e1-6c14-ce22-897960ae40f2@redhat.com> Message-ID: On Fri, Jul 20, 2018 at 4:20 AM Giulio Fidente wrote: [...] > I have started experimenting with edge deployments to help out on the > split-controplane spec [1], which Steven started addressing > > I was able to deploy multiple stacks and isolated Ceph clusters, there > are some bits missing to provision a working configuration for > nova-compute to the edge services, but we could probably collect/export > the necessary outputs from the parent stack (eg. rabbit connection > infos) and feed the edge stacks with those. > Indeed, I faced the exact same problems. I could hardcode the rabbit password and memcache IP via hieradata extraconfig, James showed me AllNodesExtraMapData done via https://review.openstack.org/#/c/581080/ which I'll probably give a try. However I couldn't set keystone url for nova / neutron (they are taken from ServiceNetMap). James pointed out to me this patch: https://review.openstack.org/#/c/521929/ - Do you think we should re-use the service net map from the central node, on the edge compute node? A much bigger challenge seems to me that for some services (eg. glance > or cinder) we need to "refresh" the configuration of the controlplane > nodes to push the details of the newly deployed ceph clusters (backends) > of the edge nodes as backends for the controlplane services. > Yeah I thought about this one too but I didn't have this challenge since I just wanted nova-compute & neutron-ovs-agent running on the edge. Alternatively, we could opt for the deployment of cinder-volume > instances on the edge nodes, but we would still have the same problem > for glance and possibly other services. > For now the only thing I see is to manually update the config on the central node and run the deployment again, which should reconfigure the containers. I'd like to discuss further this topic at the PTG to gether more > feedback so I added a bullet to the pad with the Stein PTG topics [2]. It would be awesome to spend time on this topic! Thanks for bringing this blueprint up! Indeed I hope we'll make progress on this one at the PTG, which is why I sent this email really early to groom some ideas. > 1. https://blueprints.launchpad.net/tripleo/+spec/split-controlplane > 2. 
https://etherpad.openstack.org/p/tripleo-ptg-stein Thanks, -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From sombrafam at gmail.com Fri Jul 20 11:10:37 2018 From: sombrafam at gmail.com (Erlon Cruz) Date: Fri, 20 Jul 2018 08:10:37 -0300 Subject: [openstack-dev] [cinder][nova] Proper behavior for os-force_detach In-Reply-To: References: <20180717215359.GA31698@sm-workstation> <20180718090227.thr2kb2336vptaos@localhost> Message-ID: Nice, good to know. Thanks all for the feedback. We will fix that in our drivers. @Walter, so, in this case, if Cinder has the connector, it should not need to call the driver passing a None object right? Erlon Em qua, 18 de jul de 2018 às 12:56, Walter Boring escreveu: > The whole purpose of this test is to simulate the case where Nova doesn't > know where the vm is anymore, > or may simply not exist, but we need to clean up the cinder side of > things. That being said, with the new > attach API, the connector is being saved in the cinder database for each > volume attachment. > > Walt > > On Wed, Jul 18, 2018 at 5:02 AM, Gorka Eguileor > wrote: > >> On 17/07, Sean McGinnis wrote: >> > On Tue, Jul 17, 2018 at 04:06:29PM -0300, Erlon Cruz wrote: >> > > Hi Cinder and Nova folks, >> > > >> > > Working on some tests for our drivers, I stumbled upon this tempest >> test >> > > 'force_detach_volume' >> > > that is calling Cinder API passing a 'None' connector. At the time >> this was >> > > added several CIs >> > > went down, and people started discussing whether this >> (accepting/sending a >> > > None connector) >> > > would be the proper behavior for what is expected to a driver to >> do[1]. So, >> > > some of CIs started >> > > just skipping that test[2][3][4] and others implemented fixes that >> made the >> > > driver to disconnected >> > > the volume from all hosts if a None connector was received[5][6][7]. >> > >> > Right, it was determined the correct behavior for this was to >> disconnect the >> > volume from all hosts. The CIs that are skipping this test should stop >> doing so >> > (once their drivers are fixed of course). >> > >> > > >> > > While implementing this fix seems to be straightforward, I feel that >> just >> > > removing the volume >> > > from all hosts is not the correct thing to do mainly considering that >> we >> > > can have multi-attach. >> > > >> > >> > I don't think multiattach makes a difference here. Someone is forcibly >> > detaching the volume and not specifying an individual connection. So >> based on >> > that, Cinder should be removing any connections, whether that is to one >> or >> > several hosts. >> > >> >> Hi, >> >> I agree with Sean, drivers should remove all connections for the volume. >> >> Even without multiattach there are cases where you'll have multiple >> connections for the same volume, like in a Live Migration. >> >> It's also very useful when Nova and Cinder get out of sync and your >> volume has leftover connections. In this case if you try to delete the >> volume you get a "volume in use" error from some drivers. >> >> Cheers, >> Gorka. >> >> >> > > So, my questions are: What is the best way to fix this problem? Should >> > > Cinder API continue to >> > > accept detachments with None connectors? If, so, what would be the >> effects >> > > on other Nova >> > > attachments for the same volume? Is there any side effect if the >> volume is >> > > not multi-attached? 
>> > > >> > > Additionally to this thread here, I should bring this topic to >> tomorrow's >> > > Cinder's meeting, >> > > so please join if you have something to share. >> > > >> > >> > +1 - good plan. >> > >> > >> > >> __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gergely.csatari at nokia.com Fri Jul 20 11:31:42 2018 From: gergely.csatari at nokia.com (Csatari, Gergely (Nokia - HU/Budapest)) Date: Fri, 20 Jul 2018 11:31:42 +0000 Subject: [openstack-dev] [edge][glance]: Image handling in edge environment In-Reply-To: References: Message-ID: Hi, We figured out with Jokke two timeslots what would be okay for both of us for this common meeting. Please, other interested parties give your votes to here: https://doodle.com/poll/9rfcb8aavsmybzfu I will evaluate the results and fix the time on 25.07.2018 12h CET. Br, Gerg0 From: Csatari, Gergely (Nokia - HU/Budapest) Sent: Wednesday, July 18, 2018 10:02 AM To: 'edge-computing' ; OpenStack Development Mailing List (not for usage questions) Subject: [edge][glance]: Image handling in edge environment Hi, We had a great Forum session about image handling in edge environment in Vancouver [1]. As one outcome of the session I've created a wiki with the mentioned architecture options [1]. During the Edge Working Group [3] discussions we identified some questions (some of them are in the wiki, some of them are in mails [4]) and also I would like to get some feedback on the analyzis in the wiki from people who know Glance. I think the best would be to have some kind of meeting and I see two options to organize this: * Organize a dedicated meeting for this * Add this topic as an agenda point to the Glance weekly meeting Please share your preference and/or opinion. Thanks, Gerg0 [1]: https://etherpad.openstack.org/p/yvr-edge-cloud-images [2]: https://wiki.openstack.org/wiki/Image_handling_in_edge_environment [3]: https://wiki.openstack.org/wiki/Edge_Computing_Group [4]: http://lists.openstack.org/pipermail/edge-computing/2018-June/000239.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Fri Jul 20 11:43:29 2018 From: emilien at redhat.com (Emilien Macchi) Date: Fri, 20 Jul 2018 07:43:29 -0400 Subject: [openstack-dev] [tripleo] prototype with standalone mode and remote edge compute nodes In-Reply-To: References: <79fdd12b-e0e1-6c14-ce22-897960ae40f2@redhat.com> Message-ID: On Fri, Jul 20, 2018 at 7:09 AM Emilien Macchi wrote: > Yeah I thought about this one too but I didn't have this challenge since I > just wanted nova-compute & neutron-ovs-agent running on the edge. 
> Actually I just faced it: Error: Failed to perform requested operation on instance "my-vm", the instance has an error status: Please try again later [Error: Host 'standalone-cpu-edge.localdomain' is not mapped to any cell]. I had to manually add the edge compute on the central node, so yeah we need to figure that out for the compute as well (unless I missed something in the nova config). -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From james.slagle at gmail.com Fri Jul 20 11:44:57 2018 From: james.slagle at gmail.com (James Slagle) Date: Fri, 20 Jul 2018 07:44:57 -0400 Subject: [openstack-dev] [tripleo] prototype with standalone mode and remote edge compute nodes In-Reply-To: References: <79fdd12b-e0e1-6c14-ce22-897960ae40f2@redhat.com> Message-ID: On Fri, Jul 20, 2018 at 7:09 AM, Emilien Macchi wrote: > > > On Fri, Jul 20, 2018 at 4:20 AM Giulio Fidente wrote: > [...] >> >> I have started experimenting with edge deployments to help out on the >> split-controplane spec [1], which Steven started addressing >> >> I was able to deploy multiple stacks and isolated Ceph clusters, there >> are some bits missing to provision a working configuration for >> nova-compute to the edge services, but we could probably collect/export >> the necessary outputs from the parent stack (eg. rabbit connection >> infos) and feed the edge stacks with those. > > > Indeed, I faced the exact same problems. I could hardcode the rabbit > password and memcache IP via hieradata extraconfig, James showed me > AllNodesExtraMapData done via https://review.openstack.org/#/c/581080/ which > I'll probably give a try. > However I couldn't set keystone url for nova / neutron (they are taken from > ServiceNetMap). > James pointed out to me this patch: https://review.openstack.org/#/c/521929/ Emilien/Giulio: These are 3 patches (2 are from shardy) that I've been testing with split-controlplane: https://review.openstack.org/#/c/521928/ https://review.openstack.org/#/c/521929/ https://review.openstack.org/#/c/581080/ I'll pull some docs together if I have some initial success. -- -- James Slagle -- From cjeanner at redhat.com Fri Jul 20 11:48:53 2018 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Fri, 20 Jul 2018 13:48:53 +0200 Subject: [openstack-dev] Disk space requirement - any way to lower it a little? In-Reply-To: <4e779c22-9d09-b298-c292-e66ab1718df2@redhat.com> References: <6a9fc615-c819-7c6e-244f-9c40054a6b49@redhat.com> <20180719165504.GA9267@localhost.localdomain> <4e779c22-9d09-b298-c292-e66ab1718df2@redhat.com> Message-ID: <8c51552b-2573-995c-d60e-358d2bad0282@redhat.com> On 07/20/2018 09:49 AM, Cédric Jeanneret wrote: > > > On 07/19/2018 06:55 PM, Paul Belanger wrote: >> On Thu, Jul 19, 2018 at 05:30:27PM +0200, Cédric Jeanneret wrote: >>> Hello, >>> >>> While trying to get a new validation¹ in the undercloud preflight >>> checks, I hit an (not so) unexpected issue with the CI: >>> it doesn't provide flavors with the minimal requirements, at least >>> regarding the disk space. >>> >>> A quick-fix is to disable the validations in the CI - Wes has already >>> pushed a patch for that in the upstream CI: >>> https://review.openstack.org/#/c/583275/ >>> We can consider this as a quick'n'temporary fix². 
>>> >>> The issue is on the RDO CI: apparently, they provide instances with >>> "only" 55G of free space, making the checks fail: >>> https://logs.rdoproject.org/17/582917/3/openstack-check/legacy-tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-master/c9cf398/logs/undercloud/home/zuul/undercloud_install.log.txt.gz#_2018-07-17_10_23_46 >>> >>> So, the question is: would it be possible to lower the requirment to, >>> let's say, 50G? Where does that 60G³ come from? >>> >>> Thanks for your help/feedback. >>> >>> Cheers, >>> >>> C. >>> >>> >>> >>> ¹ https://review.openstack.org/#/c/582917/ >>> >>> ² as you might know, there's a BP for a unified validation framework, >>> and it will allow to get injected configuration in CI env in order to >>> lower the requirements if necessary: >>> https://blueprints.launchpad.net/tripleo/+spec/validation-framework >>> >>> ³ >>> http://tripleo.org/install/environments/baremetal.html#minimum-system-requirements >>> >> Keep in mind, upstream we don't really have control over partitions of nodes, in >> some case it is a single, other multiple. I'd suggest looking more at: > > After some checks on y locally deployed containerized undercloud (hence, > Rocky) without real activity, here's what I could get: > - most data are located in /var - this explains the current check. > > If we go a bit deeper, here are the "actually used" directory in /var/lib: > 20K alternatives > 36K certmonger > 4.0K chrony > 1.2G config-data > 4.0K dhclient > 6.0G docker > 28K docker-config-scripts > 92K docker-container-startup-configs.json > 44K docker-puppet > 592K heat-config > 832K ironic > 4.0K ironic-inspector > 236K kolla > 4.0K logrotate > 286M mysql > 48K neutron > 4.0K ntp > 4.0K postfix > 872K puppet > 3.8M rabbitmq > 59M rpm > 4.0K rsyslog > 64K systemd > 20K tripleo > 236K tripleo-config > 9.8M yum > 7.5G total > > Most of the "default installer" partition schema don't go further than > putting /var, /tmp, /home and /usr in dedicated volumes - of course, > end-user can chose to ignore that and provide a custom schema. > > That said, we can get the "used" paths. In addition to /var/lib, there's > obviously /usr. > > We might want to: > - loop on known locations > - check if they are on dedicated mount points > - check the available disk space on those mount points. > > An interesting thing in bash: > df /var/lib/docker > Filesystem 1K-blocks Used Available Use% Mounted on > /dev/sda1 104846316 10188828 94657488 10% / > > This allows to get: > - actual volume > - free space on the volume. > > More than that, we might also try to figure out some pattern. For > instance, "docker" seems to be a pretty good candidate for space, as it > will get the images and container data. This is probably even the > biggest eater, at least on the undercloud - as well as the logs (/var/logs). > > We might do a check ensuring we can, at least, DEPLOY the app. This > would require far less than the required 60G, and with a proper doc > announcing that, we can get a functional test, aiming on its purpose: > ensure we can deploy (so asking, let's say, 10G in /var/lib/docker, 5G > in /var/lib/config-data, 5G in /usr, 1G in /var/log) and, later, upgrade > (requiring the same amount of *free* space). > > That would require some changes in the validation check of course. But > at least, we might get a pretty nice covering, while allowing it to run > smoothly in the CI. 
> But, as said: proper documentation should be set, and the "60G minimum > required" should be rephrased in order to point the locations needing > space (with the appropriate warning about "none exhaustiveness" and the > like). > > Would that suit better the actual needs, and allow to get a proper disk > space check/validation? > > Cheers, > > C. Following those thoughts, here's a proposal, to be discussed, augmented, enhanced: https://review.openstack.org/#/c/584314/ This should allow to get a really nice space check, and in addition allow ops to create a layout suited for the undercloud if they want - getting dedicated volumes for specific uses, allowing to get a smart monitoring of the disk usage per resources is always good. It kind of also allow to sort out the issue of the CI, providing we update the doc to reflect the "new" reality of this validation and expose the "real" needs of the undercloud regarding disk space. An operator will also more agree to give space if he knows why. What do you think? Cheers C. > > >> >> https://docs.openstack.org/infra/manual/testing.html >> >> As for downstream RDO, the same is going to apply once we start adding more >> cloud providers. I would look to see if you actually need that much space for >> deployments, and make try to mock the testing of that logic. >> >> - Paul >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > -- Cédric Jeanneret Software Engineer DFG:DF -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From emilien at redhat.com Fri Jul 20 12:01:12 2018 From: emilien at redhat.com (Emilien Macchi) Date: Fri, 20 Jul 2018 08:01:12 -0400 Subject: [openstack-dev] [tripleo] prototype with standalone mode and remote edge compute nodes In-Reply-To: References: <79fdd12b-e0e1-6c14-ce22-897960ae40f2@redhat.com> Message-ID: On Fri, Jul 20, 2018 at 7:43 AM Emilien Macchi wrote: > On Fri, Jul 20, 2018 at 7:09 AM Emilien Macchi wrote: > >> Yeah I thought about this one too but I didn't have this challenge since >> I just wanted nova-compute & neutron-ovs-agent running on the edge. >> > > Actually I just faced it: > Error: Failed to perform requested operation on instance "my-vm", the > instance has an error status: Please try again later [Error: Host > 'standalone-cpu-edge.localdomain' is not mapped to any cell]. > > I had to manually add the edge compute on the central node, so yeah we > need to figure that out for the compute as well (unless I missed something > in the nova config). > Nevermind, I had to set NovaSchedulerDiscoverHostsInCellsInterval to 300, so nova-schedule checks for new compute nodes every 300s and include them in the cell. -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... 
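For reference, the two ways of getting a new compute host mapped look roughly like this - the TripleO parameter maps onto nova.conf's [scheduler]/discover_hosts_in_cells_interval, and the container name below is an assumption that may differ between releases:

  # either let the scheduler discover hosts periodically, e.g. via an environment with
  #   parameter_defaults:
  #     NovaSchedulerDiscoverHostsInCellsInterval: 300
  # or run the discovery by hand on the node hosting nova-api:
  sudo docker exec nova_api nova-manage cell_v2 discover_hosts --verbose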
URL: From cdent+os at anticdent.org Fri Jul 20 12:29:08 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 20 Jul 2018 13:29:08 +0100 (BST) Subject: [openstack-dev] [nova] [placement] placement update 18-29 Message-ID: HTML: https://anticdent.org/placement-update-18-29.html This is placement update 18-28, a weekly update of ongoing development related to the [OpenStack](https://www.openstack.org/) [placement service](https://developer.openstack.org/api-ref/placement/). Thanks to Jay for providing one of these last week when I was away: # Most Important Feature freeze is next week. We're racing now to get as much of three styles of work done as possible: * Effectively managing nested and shared resource providers when managing allocations (such as in migrations). * Correctly handling resource provider and consumer generations in the nova-side report client. * Supporting reshaping provider trees. The latter two are actively in progress. Not sure about the first. Anyone? As ever, we continue to find bugs with existing features that existing tests are not catching. These are being found by people experimenting. So: experiment please. # What's Changed Most of the functionality and fixes related to consumer generations is in place on the placement side. We now enforce that consumer identifiers are uuids. # Bugs * Placement related [bugs not yet in progress](https://goo.gl/TgiPXb): 15, no change from last week. * [In progress placement bugs](https://goo.gl/vzGGDQ) 15, +1 on last week. # Main Themes ## Documentation This is a section for reminding us to document all the fun stuff we are enabling. Open areas include: * "How to deploy / model shared disk. Seems fairly straight-forward, and we could even maybe create a multi-node ceph job that does this - wouldn't that be awesome?!?!", says an enthusiastic Matt Riedemann. * The whens and wheres of re-shaping and VGPUs. ## Consumer Generations These are in place on the placement side. There's some pending work on using them properly and addresssing some nits: * Address nits from consumer generation * return 404 when no consumer found in allocs * Use placement 1.28 in scheduler report client (1.28 is consumer gens) * Use consumer generation in _heal_allocations_for_instance ## Reshape Provider Trees The work to support a /reshaper URI that allows moving inventory and allocations between resource providers is in progress. The database handling (at the bottom of the stack) is pretty much ready, the HTTP API is close except for a [small issue with allocation schema](https://review.openstack.org/#/c/583907/), and the nova side is in active progress. That's all at: ## Mirror Host Aggregates This needs a command line tool: * ## Extraction I took some time yesterday to experiment with an alternative to the os-resource-classes that [jay created](https://github.com/jaypipes/os-resource-classes). [My version](https://github.com/cdent/os-resource-classes) is, thus far, just a simple spike that makes symbols pointing to strings, and that's it. I've made a [proof of concept](https://review.openstack.org/#/c/584084/) of integrating it with placement. Other extraction things that continue to need some thought are: * infra and co-gating issues that are going to come up * copying whatever nova-based test fixture we might like # Other 20 entries two weeks ago. 29 now. 
* Purge comp_node and res_prvdr records during deletion of cells/hosts * Get resource provider by uuid or name (osc-placement) * Check provider generation and retry on conflict * Add unit test for non-placement resize * Move refresh time from report client to prov tree * PCPU resource class * rework how we pass candidate request information * add root parent NULL online migration * add resource_requests field to RequestSpec * Convert driver supported capabilities to compute node provider traits * Use placement.inventory.inuse in report client * ironic: Report resources as reserved when needed * Test for multiple limit/group_policy qparams * [placement] api-ref: add traits parameter * Convert 'placement_api_docs' into a Sphinx extension * Test for multiple limit/group_policy qparams * Disable limits if force_hosts or force_nodes is set * Rename auth_uri to www_authenticate_uri * Blazar's work on using placement * Add placement.concurrent_udpate to generation pre-checks * [placement] disallow additional fields in allocations * Delete allocations when it is re-allocated (This is addressing a TODO in the report client) * local disk inventory reporting related * Delete orphan compute nodes before updating resources * Consider forbidden traits in early exit of _get_by_one_request (Another TODO-related fix) * Remove Ocata comments which expires now * Ignore some updates from virt driver * Docs: Add Placement to Nova system architecture * Resource provider examples (osc-placement) # End Thanks to everyone for all their hard work making this happen. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From jfrancoa at redhat.com Fri Jul 20 12:55:19 2018 From: jfrancoa at redhat.com (Jose Luis Franco Arza) Date: Fri, 20 Jul 2018 14:55:19 +0200 Subject: [openstack-dev] [tripleo] Proposing Jose Luis Franco for TripleO core reviewer on Upgrade bits In-Reply-To: References: Message-ID: Thank you very much to all for the recognition. I will use this power with responsibility, as Uncle Ben once said: https://giphy.com/gifs/MCZ39lz83o5lC/fullscreen Regards, Jose Luis On Fri, Jul 20, 2018 at 1:00 PM, Emilien Macchi wrote: > > > On Fri, Jul 20, 2018 at 4:09 AM Carlos Camacho Gonzalez < > ccamacho at redhat.com> wrote: > >> Hi!!! >> >> I'll like to propose Jose Luis Franco [1][2] for core reviewer in all the >> TripleO upgrades bits. He shows a constant and active involvement in >> improving and fixing our updates/upgrades workflows, he helps also trying >> to develop/improve/fix our upstream support for testing the >> updates/upgrades. >> >> Please vote -1/+1, and consider this my +1 vote :) >> > > Nice work indeed, +1. Keep doing a good job and thanks for all your help! > -- > Emilien Macchi > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bdobreli at redhat.com Fri Jul 20 12:57:13 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Fri, 20 Jul 2018 15:57:13 +0300 Subject: [openstack-dev] [tripleo] New "validation" subcommand for "openstack undercloud" In-Reply-To: References: Message-ID: <8125c1c8-bfce-f25b-df6c-a3321ca7b272@redhat.com> On 7/16/18 6:32 PM, Dan Prince wrote: > On Mon, Jul 16, 2018 at 11:27 AM Cédric Jeanneret wrote: >> >> Dear Stackers, >> >> In order to let operators properly validate their undercloud node, I >> propose to create a new subcommand in the "openstack undercloud" "tree": >> `openstack undercloud validate' >> >> This should only run the different validations we have in the >> undercloud_preflight.py¹ >> That way, an operator will be able to ensure all is valid before >> starting "for real" any other command like "install" or "upgrade". >> >> Of course, this "validate" step is embedded in the "install" and >> "upgrade" already, but having the capability to just validate without >> any further action is something that can be interesting, for example: >> >> - ensure the current undercloud hardware/vm is sufficient for an update >> - ensure the allocated VM for the undercloud is sufficient for a deploy >> - and so on >> >> There are probably other possibilities, if we extend the "validation" >> scope outside the "undercloud" (like, tripleo, allinone, even overcloud). >> >> What do you think? Any pros/cons/thoughts? > > I think this command could be very useful. I'm assuming the underlying > implementation would call a 'heat stack-validate' using an ephemeral > heat-all instance. If so way we implement it for the undercloud vs the I think that should be just ansible commands triggered natively via tripleoclient. Why would we validate with heat deploying a throwaway one-time ephemeral stacks (for undercloud/standalon) each time a user runs that heat installer? We had to introduce the virtual stack state tracking system [0], for puppet manifests compatibility sakes only (it sometimes rely on states CREATE vs UPDATE), which added more "ephemeral complexity" in DF. I'm not following why would we validate ephemeral stacks or using it as an additional moving part? [0] https://review.openstack.org/#/q/topic:bug/1778505+(status:open+OR+status:merged) > 'standalone' use case would likely be a bit different. We can probably > subclass the implementations to share common code across the efforts > though. > > For the undercloud you are likely to have a few extra 'local only' > validations. Perhaps extra checks for things on the client side. > > For the all-in-one I had envisioned using the output from the 'heat > stack-validate' to create a sample config file for a custom set of > services. Similar to how tools like Packstack generate a config file > for example. > > Dan > >> >> Cheers, >> >> C. 
>> >> >> >> ¹ >> http://git.openstack.org/cgit/openstack/python-tripleoclient/tree/tripleoclient/v1/undercloud_preflight.py >> -- >> Cédric Jeanneret >> Software Engineer >> DFG:DF >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Best regards, Bogdan Dobrelya, Irc #bogdando From thierry at openstack.org Fri Jul 20 14:44:35 2018 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 20 Jul 2018 16:44:35 +0200 Subject: [openstack-dev] [all] [ptg] PTG track schedule published Message-ID: Hi everyone, Last month we published the tentative schedule layout for the 5 days of PTG. There was no major complaint, so that was confirmed as the PTG event schedule and published on the PTG website: https://www.openstack.org/ptg#tab_schedule You'll notice that: - The Ops meetup days were added. - Keystone track is split in two: one day on Monday for cross-project discussions around identity management, and two days on Thursday/Friday for team discussions. - The "Ask me anything" project helproom on Monday/Tuesday is for horizontal support teams (infrastructure, release management, stable maint, requirements...) to provide support for other teams, SIGs and workgroups and answer their questions. Goal champions should also be available there to help with Stein goal completion questions. - Like in Dublin, a number of tracks do not get pre-allocated time, and will be scheduled on the spot in available rooms at the time that makes the most sense for the participants. - Every track will be able to book extra time and space in available extra rooms at the event. To find more information about the event, register or book a room at the event hotel, visit: https://www.openstack.org/ptg Note that the second (and last) round of applications for travel support to the event is closing at the end of next week (July 29th) ! Apply if you need financial help attending the event: https://openstackfoundation.formstack.com/forms/travelsupportptg_denver_2018 See you there ! -- Thierry Carrez (ttx) From thierry at openstack.org Fri Jul 20 14:57:20 2018 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 20 Jul 2018 16:57:20 +0200 Subject: [openstack-dev] [all] [ptg] PTG track schedule published In-Reply-To: References: Message-ID: Thierry Carrez wrote: > Hi everyone, > > Last month we published the tentative schedule layout for the 5 days of > PTG. 
There was no major complaint, so that was confirmed as the PTG > event schedule and published on the PTG website: > > https://www.openstack.org/ptg#tab_schedule The tab temporarily disappeared, while it is being restored you can access the schedule at: https://docs.google.com/spreadsheets/d/e/2PACX-1vRM2UIbpnL3PumLjRaso_9qpOfnyV9VrPqGbTXiMVNbVgjiR3SIdl8VSBefk339MhrbJO5RficKt2Rr/pubhtml?gid=1156322660&single=true -- Thierry Carrez (ttx) From aspiers at suse.com Fri Jul 20 16:30:03 2018 From: aspiers at suse.com (Adam Spiers) Date: Fri, 20 Jul 2018 17:30:03 +0100 Subject: [openstack-dev] [self-healing] [ptg] PTG track schedule published In-Reply-To: References: Message-ID: <20180720163003.qfmg37ccmeetwkxk@pacific.linksys.moosehall> Thierry Carrez wrote: >Thierry Carrez wrote: >>Hi everyone, >> >>Last month we published the tentative schedule layout for the 5 days >>of PTG. There was no major complaint, so that was confirmed as the >>PTG event schedule and published on the PTG website: >> >>https://www.openstack.org/ptg#tab_schedule > >The tab temporarily disappeared, while it is being restored you can >access the schedule at: > >https://docs.google.com/spreadsheets/d/e/2PACX-1vRM2UIbpnL3PumLjRaso_9qpOfnyV9VrPqGbTXiMVNbVgjiR3SIdl8VSBefk339MhrbJO5RficKt2Rr/pubhtml?gid=1156322660&single=true Apologies - I have had to change plans and leave on the Thursday evening (old friend is getting married on Saturday morning). Is there any chance of swapping the self-healing slot with one of the others? Sorry for having to ask! Adam From thierry at openstack.org Fri Jul 20 16:46:03 2018 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 20 Jul 2018 18:46:03 +0200 Subject: [openstack-dev] [self-healing] [ptg] PTG track schedule published In-Reply-To: <20180720163003.qfmg37ccmeetwkxk@pacific.linksys.moosehall> References: <20180720163003.qfmg37ccmeetwkxk@pacific.linksys.moosehall> Message-ID: <6e100f5e-af06-c417-007f-3631e0e63edd@openstack.org> Adam Spiers wrote: > Apologies - I have had to change plans and leave on the Thursday > evening (old friend is getting married on Saturday morning).  Is there > any chance of swapping the self-healing slot with one of the others? It's tricky, as you asked to avoid conflicts with API SIG, Watcher, Monasca, Masakari, and Mistral... Which day would be best for you given the current schedule (assuming we don't move anything else as it's too late for that). -- Thierry Carrez (ttx) From openstack at nemebean.com Fri Jul 20 17:20:53 2018 From: openstack at nemebean.com (Ben Nemec) Date: Fri, 20 Jul 2018 12:20:53 -0500 Subject: [openstack-dev] [tripleo] prototype with standalone mode and remote edge compute nodes In-Reply-To: References: <01b99bed-21a7-160f-5186-094aacd5760b@nemebean.com> Message-ID: On 07/20/2018 01:18 AM, Bogdan Dobrelya wrote: > On 7/20/18 2:13 AM, Ben Nemec wrote: >> >> >> On 07/19/2018 03:37 PM, Emilien Macchi wrote: >>> Today I played a little bit with Standalone deployment [1] to deploy >>> a single OpenStack cloud without the need of an undercloud and >>> overcloud. >>> The use-case I am testing is the following: >>> "As an operator, I want to deploy a single node OpenStack, that I can >>> extend with remote compute nodes on the edge when needed." 
>>> >>> We still have a bunch of things to figure out so it works out of the >>> box, but so far I was able to build something that worked, and I >>> found useful to share it early to gather some feedback: >>> https://gitlab.com/emacchi/tripleo-standalone-edge >>> >>> Keep in mind this is a proof of concept, based on upstream >>> documentation and re-using 100% what is in TripleO today. The only >>> thing I'm doing is to change the environment and the roles for the >>> remote compute node. >>> I plan to work on cleaning the manual steps that I had to do to make >>> it working, like hardcoding some hiera parameters and figure out how >>> to override ServiceNetmap. >>> >>> Anyway, feel free to test / ask questions / provide feedback. >> >> What is the benefit of doing this over just using deployed server to >> install a remote server from the central management system?  You need >> to have connectivity back to the central location anyway.  Won't this >> become unwieldy with a large number of edge nodes?  I thought we told >> people not to use Packstack for multi-node deployments for exactly >> that reason. >> >> I guess my concern is that eliminating the undercloud makes sense for >> single-node PoC's and development work, but for what sounds like a >> production workload I feel like you're cutting off your nose to spite >> your face.  In the interest of saving one VM's worth of resources, now >> all of your day 2 operations have no built-in orchestration.  Every >> time you want to change a configuration it's "copy new script to >> system, ssh to system, run script, repeat for all systems.  So maybe >> this is a backdoor way to make Ansible our API? ;-) > > Ansible may orchestrate that for day 2. Deploying Heat stacks is already > made ephemeral for standalone/underclouds so only thing you'll need for > day 2 is ansible really. Hence, the need of undercloud shrinks into > having an ansible control node, like your laptop, to control all clouds > via inventory. So I guess the answer to my last question is yes. :-) Are we planning to reimplement all of our API workflows in Ansible or are users expected to do that themselves? From openstack at nemebean.com Fri Jul 20 18:54:49 2018 From: openstack at nemebean.com (Ben Nemec) Date: Fri, 20 Jul 2018 13:54:49 -0500 Subject: [openstack-dev] [tripleo] prototype with standalone mode and remote edge compute nodes In-Reply-To: References: <01b99bed-21a7-160f-5186-094aacd5760b@nemebean.com> Message-ID: <5d5456f2-4b9d-c0b6-1645-d478f16c0960@nemebean.com> Okay, based on a private conversation this is coming off as way more troll-ish than I intended. I don't understand where this work is going, but I don't really need to so I'll step back from the discussion. Apologies for any offense. -Ben On 07/20/2018 12:20 PM, Ben Nemec wrote: > > > On 07/20/2018 01:18 AM, Bogdan Dobrelya wrote: >> On 7/20/18 2:13 AM, Ben Nemec wrote: >>> >>> >>> On 07/19/2018 03:37 PM, Emilien Macchi wrote: >>>> Today I played a little bit with Standalone deployment [1] to deploy >>>> a single OpenStack cloud without the need of an undercloud and >>>> overcloud. >>>> The use-case I am testing is the following: >>>> "As an operator, I want to deploy a single node OpenStack, that I >>>> can extend with remote compute nodes on the edge when needed." 
>>>> >>>> We still have a bunch of things to figure out so it works out of the >>>> box, but so far I was able to build something that worked, and I >>>> found useful to share it early to gather some feedback: >>>> https://gitlab.com/emacchi/tripleo-standalone-edge >>>> >>>> Keep in mind this is a proof of concept, based on upstream >>>> documentation and re-using 100% what is in TripleO today. The only >>>> thing I'm doing is to change the environment and the roles for the >>>> remote compute node. >>>> I plan to work on cleaning the manual steps that I had to do to make >>>> it working, like hardcoding some hiera parameters and figure out how >>>> to override ServiceNetmap. >>>> >>>> Anyway, feel free to test / ask questions / provide feedback. >>> >>> What is the benefit of doing this over just using deployed server to >>> install a remote server from the central management system?  You need >>> to have connectivity back to the central location anyway.  Won't this >>> become unwieldy with a large number of edge nodes?  I thought we told >>> people not to use Packstack for multi-node deployments for exactly >>> that reason. >>> >>> I guess my concern is that eliminating the undercloud makes sense for >>> single-node PoC's and development work, but for what sounds like a >>> production workload I feel like you're cutting off your nose to spite >>> your face.  In the interest of saving one VM's worth of resources, >>> now all of your day 2 operations have no built-in orchestration. >>> Every time you want to change a configuration it's "copy new script >>> to system, ssh to system, run script, repeat for all systems.  So >>> maybe this is a backdoor way to make Ansible our API? ;-) >> >> Ansible may orchestrate that for day 2. Deploying Heat stacks is >> already made ephemeral for standalone/underclouds so only thing you'll >> need for day 2 is ansible really. Hence, the need of undercloud >> shrinks into having an ansible control node, like your laptop, to >> control all clouds via inventory. > > So I guess the answer to my last question is yes. :-) > > Are we planning to reimplement all of our API workflows in Ansible or > are users expected to do that themselves? > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From emilien at redhat.com Fri Jul 20 19:06:42 2018 From: emilien at redhat.com (Emilien Macchi) Date: Fri, 20 Jul 2018 15:06:42 -0400 Subject: [openstack-dev] [tripleo] prototype with standalone mode and remote edge compute nodes In-Reply-To: <5d5456f2-4b9d-c0b6-1645-d478f16c0960@nemebean.com> References: <01b99bed-21a7-160f-5186-094aacd5760b@nemebean.com> <5d5456f2-4b9d-c0b6-1645-d478f16c0960@nemebean.com> Message-ID: On Fri, Jul 20, 2018 at 2:55 PM Ben Nemec wrote: > Okay, based on a private conversation this is coming off as way more > troll-ish than I intended. I don't understand where this work is going, > but I don't really need to so I'll step back from the discussion. > Apologies for any offense. > No offense here, Ben. In fact I hope we can still continue to have a productive discussion here. I'm speaking on my own view now, and I'm happy to be wrong and learn but I wanted to explore how far we can bring the work around standalone architecture. 
If it was worth exploring making it "multi-node" somehow, what would be our technical challenges and more than anything else: what use-case we would enable. I'm actually quite happy to see that people already looked at some of these challenges before (see what Giulio / James / Steve H. already worked on), so I guess it makes sense to continue the investigation. We are not making any decision right now in what API we plan to use. The current production architecture is still undercloud + overcloud, and our day 2 operations are done by Mistral/Heat for now but as we transition more to Ansible I think we wanted to explore more options. I hope this little background helped. Thanks, -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From hjensas at redhat.com Fri Jul 20 19:49:04 2018 From: hjensas at redhat.com (Harald =?ISO-8859-1?Q?Jens=E5s?=) Date: Fri, 20 Jul 2018 21:49:04 +0200 Subject: [openstack-dev] [tripleo] prototype with standalone mode and remote edge compute nodes In-Reply-To: References: <01b99bed-21a7-160f-5186-094aacd5760b@nemebean.com> <5d5456f2-4b9d-c0b6-1645-d478f16c0960@nemebean.com> Message-ID: <3cecab9063041f20497ce22afc1f2eb6fbdb0c09.camel@redhat.com> On Fri, 2018-07-20 at 15:06 -0400, Emilien Macchi wrote: > > > On Fri, Jul 20, 2018 at 2:55 PM Ben Nemec > wrote: > > Okay, based on a private conversation this is coming off as way > > more > > troll-ish than I intended. I don't understand where this work is > > going, > > but I don't really need to so I'll step back from the discussion. > > Apologies for any offense. > > No offense here, Ben. In fact I hope we can still continue to have a > productive discussion here. > > I'm speaking on my own view now, and I'm happy to be wrong and learn > but I wanted to explore how far we can bring the work around > standalone architecture. If it was worth exploring making it "multi- > node" somehow, what would be our technical challenges and more than > anything else: what use-case we would enable. > > I'm actually quite happy to see that people already looked at some of > these challenges before (see what Giulio / James / Steve H. already > worked on), so I guess it makes sense to continue the investigation. > We are not making any decision right now in what API we plan to use. > The current production architecture is still undercloud + overcloud, > and our day 2 operations are done by Mistral/Heat for now but as we > transition more to Ansible I think we wanted to explore more options. > > I hope this little background helped. > Thanks, > -- > Emilien Macchi > The split-stack work is interesting. I'm however not convinced driving the standalone for the edge use cases. (Like Ben I don't have enough background ...) However, this is the spec of an edge user: https://review.openstack.org/543936 - they want ironic to deploy their nodes and I bet choosing os-net-config is a tripleo influenced choice ... I think for these users an undercloud that can deploy multiple overclouds would make more sense. I.e Deploy one undercloud and use that to deploy a number of overclouds. I imagine split stack being used to separate compute and controllers. Something like: controlplane-overcloud-a + compute-overcloud-a0 + compute-overcloud-a1 + compute-overcloud-a2. 3x stacks building one cloud using split-stack. Then the undercloud must be able to deploy b, c, d etc stack sets similar to the a stacks. 
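As a very rough sketch of what that could look like from a single undercloud - the stack names, roles files and environments below are purely illustrative, and wiring the controlplane outputs into the compute stacks is exactly the part split-controlplane still has to solve:

  # one controlplane stack per site...
  openstack overcloud deploy --stack controlplane-overcloud-a \
    --templates -r controlplane_roles.yaml -e site-a-common.yaml
  # ...plus independent compute stacks that can be scaled/updated on their own
  openstack overcloud deploy --stack compute-overcloud-a0 \
    --templates -r compute_roles.yaml -e site-a-common.yaml -e site-a-edge0.yaml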
This keeps the central management system but we can manage multiple clouds and for edge-cases we can do scale/update/upgrade operations on the stack for compute/storage nodes in each edge individually etc. -- Harald From james.slagle at gmail.com Fri Jul 20 19:53:07 2018 From: james.slagle at gmail.com (James Slagle) Date: Fri, 20 Jul 2018 15:53:07 -0400 Subject: [openstack-dev] [tripleo] prototype with standalone mode and remote edge compute nodes In-Reply-To: <01b99bed-21a7-160f-5186-094aacd5760b@nemebean.com> References: <01b99bed-21a7-160f-5186-094aacd5760b@nemebean.com> Message-ID: On Thu, Jul 19, 2018 at 7:13 PM, Ben Nemec wrote: > > > On 07/19/2018 03:37 PM, Emilien Macchi wrote: >> >> Today I played a little bit with Standalone deployment [1] to deploy a >> single OpenStack cloud without the need of an undercloud and overcloud. >> The use-case I am testing is the following: >> "As an operator, I want to deploy a single node OpenStack, that I can >> extend with remote compute nodes on the edge when needed." >> >> We still have a bunch of things to figure out so it works out of the box, >> but so far I was able to build something that worked, and I found useful to >> share it early to gather some feedback: >> https://gitlab.com/emacchi/tripleo-standalone-edge >> >> Keep in mind this is a proof of concept, based on upstream documentation >> and re-using 100% what is in TripleO today. The only thing I'm doing is to >> change the environment and the roles for the remote compute node. >> I plan to work on cleaning the manual steps that I had to do to make it >> working, like hardcoding some hiera parameters and figure out how to >> override ServiceNetmap. >> >> Anyway, feel free to test / ask questions / provide feedback. > > > What is the benefit of doing this over just using deployed server to install > a remote server from the central management system? You need to have > connectivity back to the central location anyway. Won't this become > unwieldy with a large number of edge nodes? I thought we told people not to > use Packstack for multi-node deployments for exactly that reason. > > I guess my concern is that eliminating the undercloud makes sense for > single-node PoC's and development work, but for what sounds like a > production workload I feel like you're cutting off your nose to spite your > face. In the interest of saving one VM's worth of resources, now all of > your day 2 operations have no built-in orchestration. Every time you want > to change a configuration it's "copy new script to system, ssh to system, > run script, repeat for all systems. So maybe this is a backdoor way to make > Ansible our API? ;-) I believe Emilien was looking at this POC in part because of some input from me, so I will attempt to address your questions constructively. What you're looking at here is exactly a POC. The deployment is a POC using the experimental standalone code. I think the use case as presented by Emilien is something worth considering: >> "As an operator, I want to deploy a single node OpenStack, that I can >> extend with remote compute nodes on the edge when needed." I wouldn't interpret that to mean much of anything around eliminating the undercloud, other than what is stated for the use case. I feel that jumping to eliminating the undercloud would be an over simplification. The goal of the POC isn't packstack parity, or even necessarily a packstack like architecture. One of the goals is to see if we can deploy separate disconnected stacks for Control and Compute. 
The standalone work happens to be a good way to test out some of the work around that. The use case was written to help describe and provide an overall picture of what is going on with this specific POC, with a focus towards the edge use case. You make some points about centralized management and connectivity back to the central location. Those are the exact sorts of things we are thinking about when we consider how we will address edge deployments. If you haven't had a chance yet, check out the Edge Computing whitepaper from the foundation: https://www.openstack.org/assets/edge/OpenStack-EdgeWhitepaper-v3-online.pdf Particularly the challenges outlined around management and deployment tooling. For lack of anything better I'm calling these the 3 D's: - Decentralized - Distributed - Disconnected How can TripleO address any of these? For Decentralized, I'd like to see better separation between the planning and application of the deployment in TripleO. TripleO has had the concept of a plan for quite a while, and we've been using it very effectively for our deployment, but it is somewhat hidden from the operator. It's not entirely clear to the user that there is any separation between the plan and the stack, and what benefit there even is in the plan. I'd like to address some of that through API improvements around plan management and making the plan the top level thing being managed instead of a deployment. We're already moving in this direction with config-download and a lot of the changes we've made during Queens. For better or worse, some other tools like Terraform call this out as one their main differentiators: https://www.terraform.io/intro/vs/cloudformation.html (3rd paragraph). TripleO has long separated the planning and application phases. We just need to do a better job at developing useful features around that work. The UI has been taking advantage of it more than anything else at this point. I'd like to focus a bit more on what benefits we get from the plan, and how we can turn these into operator value. Imagine a scenario where you have a plan that has been deployed, and you want to make some changes. You upload a new plan, the plan is processed, we update a copy of the deployed stack (or perhaps ephemeral stack), run config-download, and the operator has the immediate feedback about what *would* be changed. Heat plays a role here in giving us a way to orchestrate the plan into a deployment model. Ansible also plays a role in that we could take things a step further and run with --check to provide further feedback before anything is ever applied or updated. Ongoing work around new baremetal management workflows via metalsmith will give us more insight into planning the baremetal deployment. These tools (Heat/Ansible/Metalsmith/etc), they are technology choices. They are not architectures in and of themselves. You have centralized management of the planning phase, whose output could be a set of playbooks applied in a decentralized way, such as provided via an API and downloaded to a remote site where an operator is sitting in a emergency response scenario with some "hardware in a box" that they want to deploy local compute/storage resources on to, and connect to a local network. Connectivity back to the centralized platform may or may not be required depending on what services are deployed. For Distributed, I think of git. We have built-in git management of the config-download output. We are discussing (further) git management of the templates and processed plan. 
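A rough sketch of how those existing pieces can already be strung together today - command names are from the current tooling, but exact flags and the layout of the download directory vary a bit between releases:

  openstack overcloud config download --name overcloud --config-dir ~/config-download
  cd ~/config-download      # the playbooks may sit in a subdirectory depending on the release
  git log --oneline         # the rendered config is kept in a local git repository
  tripleo-ansible-inventory --static-yaml-inventory inventory.yaml
  # preview what a change *would* do before applying anything
  ansible-playbook -i inventory.yaml deploy_steps_playbook.yaml --check --diff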
This gives operators some ability to manage the output in a distributive fashion, and make new changes outside of the centralized platform. Perhaps in the future, we could offer an API/interface around pulling any changes back into the represented plan based on what an operator had changed. Sort of like a pull request for the plan, but by starting with the output. Obviously, this needs a lot more definition and refining other than just "use git". Again, these efforts are about experimenting with use cases, not technology choices. To get us to those experiments quickly, it may look like we are making rash decisions about use X or Y, but that's not the driver here. For Disconnected, it also ties into how we'd address decentralized and distributed. The choice of tooling helps, but it's not as simple as "use Ansible". Part of the reason we are looking at this POC, and how to deploy it easily is to investigate questions such as what happens to the deployed workloads if the compute loses connectivity to the control plane or management platform. We want to make sure TripleO can deploy something that can handle these sorts of scenarios. During periods of disconnection at the edge or other remote sites, operators may still need to make changes (see points about distributed above). Using the standalone deployment can help us quickly answer these questions and develop a "Steel Thread"[1] to build upon. Ultimately, this is the sort of high level designs and architectures we are beginning to investigate. We are trying to let the use cases and operator need address the design, even while the use cases are still being better understood (see above whitepaper). It's not about "just use Ansible" or "rewrite the API". [1] http://www.agiledevelopment.org/agile-talk/111-defining-acceptance-criteria-using-the-steel-thread-concept -- -- James Slagle -- From openstack at nemebean.com Fri Jul 20 21:43:47 2018 From: openstack at nemebean.com (Ben Nemec) Date: Fri, 20 Jul 2018 16:43:47 -0500 Subject: [openstack-dev] [tripleo] prototype with standalone mode and remote edge compute nodes In-Reply-To: References: <01b99bed-21a7-160f-5186-094aacd5760b@nemebean.com> Message-ID: <9d51f1bd-1e49-5ab0-a15e-6931ee505f05@nemebean.com> On 07/20/2018 02:53 PM, James Slagle wrote: > On Thu, Jul 19, 2018 at 7:13 PM, Ben Nemec wrote: >> >> >> On 07/19/2018 03:37 PM, Emilien Macchi wrote: >>> >>> Today I played a little bit with Standalone deployment [1] to deploy a >>> single OpenStack cloud without the need of an undercloud and overcloud. >>> The use-case I am testing is the following: >>> "As an operator, I want to deploy a single node OpenStack, that I can >>> extend with remote compute nodes on the edge when needed." >>> >>> We still have a bunch of things to figure out so it works out of the box, >>> but so far I was able to build something that worked, and I found useful to >>> share it early to gather some feedback: >>> https://gitlab.com/emacchi/tripleo-standalone-edge >>> >>> Keep in mind this is a proof of concept, based on upstream documentation >>> and re-using 100% what is in TripleO today. The only thing I'm doing is to >>> change the environment and the roles for the remote compute node. >>> I plan to work on cleaning the manual steps that I had to do to make it >>> working, like hardcoding some hiera parameters and figure out how to >>> override ServiceNetmap. >>> >>> Anyway, feel free to test / ask questions / provide feedback. 
>> >> >> What is the benefit of doing this over just using deployed server to install >> a remote server from the central management system? You need to have >> connectivity back to the central location anyway. Won't this become >> unwieldy with a large number of edge nodes? I thought we told people not to >> use Packstack for multi-node deployments for exactly that reason. >> >> I guess my concern is that eliminating the undercloud makes sense for >> single-node PoC's and development work, but for what sounds like a >> production workload I feel like you're cutting off your nose to spite your >> face. In the interest of saving one VM's worth of resources, now all of >> your day 2 operations have no built-in orchestration. Every time you want >> to change a configuration it's "copy new script to system, ssh to system, >> run script, repeat for all systems. So maybe this is a backdoor way to make >> Ansible our API? ;-) > > I believe Emilien was looking at this POC in part because of some > input from me, so I will attempt to address your questions > constructively. > > What you're looking at here is exactly a POC. The deployment is a POC > using the experimental standalone code. I think the use case as > presented by Emilien is something worth considering: > >>> "As an operator, I want to deploy a single node OpenStack, that I can >>> extend with remote compute nodes on the edge when needed." > > I wouldn't interpret that to mean much of anything around eliminating > the undercloud, other than what is stated for the use case. I feel > that jumping to eliminating the undercloud would be an over > simplification. The goal of the POC isn't packstack parity, or even > necessarily a packstack like architecture. Okay, this was the main disconnect for me. I got the impression from the discussion up til now that eliminating the undercloud was part of the requirements. Looking back at Emilien's original email I think I conflated the standalone PoC description with the use-case description. My bad. > > One of the goals is to see if we can deploy separate disconnected > stacks for Control and Compute. The standalone work happens to be a > good way to test out some of the work around that. The use case was > written to help describe and provide an overall picture of what is > going on with this specific POC, with a focus towards the edge use > case. > > You make some points about centralized management and connectivity > back to the central location. Those are the exact sorts of things we > are thinking about when we consider how we will address edge > deployments. If you haven't had a chance yet, check out the Edge > Computing whitepaper from the foundation: > > https://www.openstack.org/assets/edge/OpenStack-EdgeWhitepaper-v3-online.pdf > > Particularly the challenges outlined around management and deployment > tooling. For lack of anything better I'm calling these the 3 D's: > - Decentralized > - Distributed > - Disconnected > > How can TripleO address any of these? > > For Decentralized, I'd like to see better separation between the > planning and application of the deployment in TripleO. TripleO has had > the concept of a plan for quite a while, and we've been using it very > effectively for our deployment, but it is somewhat hidden from the > operator. It's not entirely clear to the user that there is any > separation between the plan and the stack, and what benefit there even > is in the plan. +1. 
I was disappointed that we didn't adopt the plan as more of a first-class citizen for cli deployments after it was implemented. > > I'd like to address some of that through API improvements around plan > management and making the plan the top level thing being managed > instead of a deployment. We're already moving in this direction with > config-download and a lot of the changes we've made during Queens. > > For better or worse, some other tools like Terraform call this out as > one their main differentiators: > > https://www.terraform.io/intro/vs/cloudformation.html (3rd paragraph). > > TripleO has long separated the planning and application phases. We > just need to do a better job at developing useful features around that > work. The UI has been taking advantage of it more than anything else > at this point. I'd like to focus a bit more on what benefits we get > from the plan, and how we can turn these into operator value. > > Imagine a scenario where you have a plan that has been deployed, and > you want to make some changes. You upload a new plan, the plan is > processed, we update a copy of the deployed stack (or perhaps > ephemeral stack), run config-download, and the operator has the > immediate feedback about what *would* be changed. Heat plays a role > here in giving us a way to orchestrate the plan into a deployment > model. > > Ansible also plays a role in that we could take things a step further > and run with --check to provide further feedback before anything is > ever applied or updated. Ongoing work around new baremetal management > workflows via metalsmith will give us more insight into planning the > baremetal deployment. These tools (Heat/Ansible/Metalsmith/etc), they > are technology choices. They are not architectures in and of > themselves. > > You have centralized management of the planning phase, whose output > could be a set of playbooks applied in a decentralized way, such as > provided via an API and downloaded to a remote site where an operator > is sitting in a emergency response scenario with some "hardware in a > box" that they want to deploy local compute/storage resources on to, > and connect to a local network. Connectivity back to the centralized > platform may or may not be required depending on what services are > deployed. > > For Distributed, I think of git. We have built-in git management of > the config-download output. We are discussing (further) git management > of the templates and processed plan. This gives operators some ability > to manage the output in a distributive fashion, and make new changes > outside of the centralized platform. > > Perhaps in the future, we could offer an API/interface around pulling > any changes back into the represented plan based on what an operator > had changed. Sort of like a pull request for the plan, but by starting > with the output. > > Obviously, this needs a lot more definition and refining other than > just "use git". Again, these efforts are about experimenting with use > cases, not technology choices. To get us to those experiments quickly, > it may look like we are making rash decisions about use X or Y, but > that's not the driver here. +1 again. I argued to use git as the storage backend for plans in the first place. :-) This isn't the exact use case I had in mind, but there's definitely overlap. > > For Disconnected, it also ties into how we'd address decentralized and > distributed. The choice of tooling helps, but it's not as simple as > "use Ansible". 
Part of the reason we are looking at this POC, and how > to deploy it easily is to investigate questions such as what happens > to the deployed workloads if the compute loses connectivity to the > control plane or management platform. We want to make sure TripleO can > deploy something that can handle these sorts of scenarios. During > periods of disconnection at the edge or other remote sites, operators > may still need to make changes (see points about distributed above). This is a requirement I was missing as well. If you don't necessarily have connectivity back to the mothership and need to be able to manage the deployment anyway then the standalone part is obviously a necessity. I'd be curious how this works with OpenStack in general, but like you said this is a PoC to find out. > > Using the standalone deployment can help us quickly answer these > questions and develop a "Steel Thread"[1] to build upon. > > Ultimately, this is the sort of high level designs and architectures > we are beginning to investigate. We are trying to let the use cases > and operator need address the design, even while the use cases are > still being better understood (see above whitepaper). It's not about > "just use Ansible" or "rewrite the API". > > [1] http://www.agiledevelopment.org/agile-talk/111-defining-acceptance-criteria-using-the-steel-thread-concept > > From openstack at nemebean.com Fri Jul 20 21:46:56 2018 From: openstack at nemebean.com (Ben Nemec) Date: Fri, 20 Jul 2018 16:46:56 -0500 Subject: [openstack-dev] [tripleo] prototype with standalone mode and remote edge compute nodes In-Reply-To: References: <01b99bed-21a7-160f-5186-094aacd5760b@nemebean.com> <5d5456f2-4b9d-c0b6-1645-d478f16c0960@nemebean.com> Message-ID: <4fc6f52d-e5da-c0d4-1945-fac6327b617c@nemebean.com> On 07/20/2018 02:06 PM, Emilien Macchi wrote: > > > On Fri, Jul 20, 2018 at 2:55 PM Ben Nemec > wrote: > > Okay, based on a private conversation this is coming off as way more > troll-ish than I intended.  I don't understand where this work is > going, > but I don't really need to so I'll step back from the discussion. > Apologies for any offense. > > > No offense here, Ben. In fact I hope we can still continue to have a > productive discussion here. > > I'm speaking on my own view now, and I'm happy to be wrong and learn but > I wanted to explore how far we can bring the work around standalone > architecture. If it was worth exploring making it "multi-node" somehow, > what would be our technical challenges and more than anything else: what > use-case we would enable. > > I'm actually quite happy to see that people already looked at some of > these challenges before (see what Giulio / James / Steve H. already > worked on), so I guess it makes sense to continue the investigation. > We are not making any decision right now in what API we plan to use. The > current production architecture is still undercloud + overcloud, and our > day 2 operations are done by Mistral/Heat for now but as we transition > more to Ansible I think we wanted to explore more options. > > I hope this little background helped. Yeah, I realize now that I invented some requirements that you didn't actually state in your original email. Slap on the wrist to me for poor reading comprehension. 
:-) From Jean-Philippe at evrard.me Sat Jul 21 08:54:43 2018 From: Jean-Philippe at evrard.me (Jean-Philippe Evrard) Date: Sat, 21 Jul 2018 08:54:43 +0000 Subject: [openstack-dev] [docs][all] Front page template for project team documentation In-Reply-To: <20180719175529.031fe344e127909028757c06@redhat.com> References: <20180629164553.258c79a096fd7a300c31faee@redhat.com> <20180719175529.031fe344e127909028757c06@redhat.com> Message-ID: <952DAFD6-5A54-4037-B2B5-3AB15676FEB8@evrard.me> Is there a lint tool that can catch incoherent markup at a global project level (vs at a page gen level)? Any tool to catch these issues would help. JP. On July 19, 2018 3:55:29 PM UTC, Petr Kovar wrote: >Hi all, > >A spin-off discussion in https://review.openstack.org/#/c/579177/ >resulted >in an idea to update our RST conventions for headings level 2 and 3 so >that >our guidelines follow recommendations from >http://docutils.sourceforge.net/docs/user/rst/quickstart.html#sections. > >The updated conventions also better reflect what most projects have >been >using already, regardless of what was previously in our conventions. > >To sum up, for headings level 2, use dashes: > >Heading 2 >--------- > >For headings level 3, use tildes: > >Heading 3 >~~~~~~~~~ > >For details on the change, see: > >https://review.openstack.org/#/c/583239/1/doc/doc-contrib-guide/source/rst-conv/titles.rst > >Thanks, >pk > > >On Fri, 29 Jun 2018 16:45:53 +0200 >Petr Kovar wrote: > >> Hi all, >> >> Feedback from the Queens PTG included requests for the Documentation >> Project to provide guidance and recommendations on how to structure >common >> content typically found on the front page for project team docs, >located at >> doc/source/index.rst in the project team repository. >> >> I've created a new docs spec, proposing a template to be used by >project >> teams, and would like to ask the OpenStack community and, >specifically, the >> project teams, to take a look, submit feedback on the spec, share >> comments, ideas, or concerns: >> >> https://review.openstack.org/#/c/579177/ >> >> The main goal of providing and using this template is to make it >easier for >> users to find, navigate, and consume project team documentation, and >for >> contributors to set up and maintain the project team docs. >> >> The template would also serve as the basis for one of the future >governance >> docs tags, which is a long-term plan for the docs team. >> >> Thank you, >> pk >> >> >__________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > >__________________________________________________________________________ >OpenStack Development Mailing List (not for usage questions) >Unsubscribe: >OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Sent from my Android device with K-9 Mail. Please excuse my brevity. -------------- next part -------------- An HTML attachment was scrubbed... URL: From persia at shipstone.jp Sat Jul 21 15:00:26 2018 From: persia at shipstone.jp (Emmet Hikory) Date: Sat, 21 Jul 2018 08:00:26 -0700 Subject: [openstack-dev] Election Season, PTL July 2018 Message-ID: <20180721080026.f27ad719b43b69db2252f6d4@shipstone.jp> Election season is nearly here! 
Election details: https://governance.openstack.org/election/ Please read the stipulations and timelines for candidates and electorate contained in this governance documentation. Please note, if only one candidate is nominated as PTL for a program during the PTL nomination period, that candidate will win by acclaim, and there will be no poll. There will only be a poll if there is more than one candidate stepping forward for a program's PTL position. There will be further announcements posted to the mailing list as action is required from the electorate or candidates. This email is for information purposes only. If you have any questions which you feel affect others please reply to this email thread. If you have any questions that you which to discuss in private please email any of the election officials[1] so that we may address your concerns. Thank you, [1] https://governance.openstack.org/election/#election-officials -- Emmet HIKORY From zhipengh512 at gmail.com Mon Jul 23 06:19:47 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Mon, 23 Jul 2018 14:19:47 +0800 Subject: [openstack-dev] [cyborg]Nominate Zhenghao Wang as new core reviewer Message-ID: Hi Team, If you have been closely part of the rocky crazy development process, then you know Zhenghao :) Zhenghao is an open source engineer from Lenovo and has been very active in Cyborg project in Rocky cycle. He has helped finished os-acc lib setup, worked with coco to demonstrate the first ever working Cyborg demo at Vancouver Summit, and lead on the GPU driver development as well as many other critical patches at the moment. His stats could be found at [0] and [1]. As part of the tradition, please feedback any of your concern you might have for this nomination, if there is no objection the nomination will go into effect next Monday. [0] http://stackalytics.com/?module=cyborg-group [1] https://review.openstack.org/#/q/project:openstack/cyborg+owner:%22wangzhh+%253Cwangzh21%2540lenovo.com%253E%22 -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From ccamacho at redhat.com Mon Jul 23 08:12:13 2018 From: ccamacho at redhat.com (Carlos Camacho Gonzalez) Date: Mon, 23 Jul 2018 10:12:13 +0200 Subject: [openstack-dev] [tripleo] Proposing Jose Luis Franco for TripleO core reviewer on Upgrade bits In-Reply-To: References: Message-ID: Thank you Jose Luis for the work, Let's keep the thread open until July 31st and iif there is no veto I'll grant you the correct permissions. Cheers, Carlos. On Fri, Jul 20, 2018 at 2:55 PM, Jose Luis Franco Arza wrote: > Thank you very much to all for the recognition. > I will use this power with responsibility, as Uncle Ben once said: > https://giphy.com/gifs/MCZ39lz83o5lC/fullscreen > > Regards, > Jose Luis > > On Fri, Jul 20, 2018 at 1:00 PM, Emilien Macchi > wrote: > >> >> >> On Fri, Jul 20, 2018 at 4:09 AM Carlos Camacho Gonzalez < >> ccamacho at redhat.com> wrote: >> >>> Hi!!! >>> >>> I'll like to propose Jose Luis Franco [1][2] for core reviewer in all >>> the TripleO upgrades bits. 
He shows a constant and active involvement in >>> improving and fixing our updates/upgrades workflows, he helps also trying >>> to develop/improve/fix our upstream support for testing the >>> updates/upgrades. >>> >>> Please vote -1/+1, and consider this my +1 vote :) >>> >> >> Nice work indeed, +1. Keep doing a good job and thanks for all your help! >> -- >> Emilien Macchi >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lujinluo at gmail.com Mon Jul 23 08:16:42 2018 From: lujinluo at gmail.com (Lujin Luo) Date: Mon, 23 Jul 2018 16:16:42 +0800 Subject: [openstack-dev] [neutron] Bug deputy report 07/16/2018 - 07/22/2018 Message-ID: Hello everyone, I am on bug deputy from July 16th to 22th. Here is a brief summary of the bugs reported during this period. In total we have 6 bugs reported last week. 1. https://bugs.launchpad.net/neutron/+bug/1781892 - confirmed (Low). QoS related. Proposed patch to add a clarification that QoS policy attached to a floating IP will not be automatically associated and visible in port's ``qos_policy_id`` field after associating a floating IP to a port. (Link to the patch: https://review.openstack.org/#/c/583967/ ) 2. https://bugs.launchpad.net/neutron/+bug/1782141 - confirmed (High). QoS related. Patch proposed to clear rate limits when default NULL values are used. (Link to the patch: https://review.openstack.org/#/c/584297/) 3. https://bugs.launchpad.net/neutron/+bug/1782026 - duplicate of 1758952. Backport patch proposed/merged to stable/queens. 4. https://bugs.launchpad.net/neutron/+bug/1782337 - duplicate of 1776840. Backport patch proposed and under review https://review.openstack.org/#/c/584172/. 5. https://bugs.launchpad.net/neutron/+bug/1782421 - Under discussion. Large scale concurrent port creations fail due to revision number bumps. The submitter has had a workaround to solve his issue, but it may have side effects also. Anyone who is familiar with large scale deployments/revision numbers, please kindly join the discussion. 6. https://bugs.launchpad.net/neutron/+bug/1782576 - confirmed (High). SG logging data is not logged into /var/log/syslog. Best regards, Lujin From skaplons at redhat.com Mon Jul 23 09:20:53 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Mon, 23 Jul 2018 11:20:53 +0200 Subject: [openstack-dev] [Nova][Cinder][Tempest] Help with tempest.api.compute.servers.test_device_tagging.TaggedAttachmentsTest.test_tagged_attachment needed In-Reply-To: References: <6AEB5700-BBCA-46C3-9A48-83EC7CC92475@redhat.com> Message-ID: Hi, Thx Artom for taking care of it. Did You made any progress? I think that it might be quite important to fix as it failed around 50 times during last 7 days: http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22line%20386%2C%20in%20test_tagged_attachment%5C%22 > Wiadomość napisana przez Artom Lifshitz w dniu 19.07.2018, o godz. 
19:28: > > I've proposed [1] to add extra logging on the Nova side. Let's see if > that helps us catch the root cause of this. > > [1] https://review.openstack.org/584032 > > On Thu, Jul 19, 2018 at 12:50 PM, Artom Lifshitz wrote: >> Because we're waiting for the volume to become available before we >> continue with the test [1], its tag still being present means Nova's >> not cleaning up the device tags on volume detach. This is most likely >> a bug. I'll look into it. >> >> [1] https://github.com/openstack/tempest/blob/master/tempest/api/compute/servers/test_device_tagging.py#L378 >> >> On Thu, Jul 19, 2018 at 7:09 AM, Slawomir Kaplonski wrote: >>> Hi, >>> >>> Since some time we see that test tempest.api.compute.servers.test_device_tagging.TaggedAttachmentsTest.test_tagged_attachment is failing sometimes. >>> Bug about that is reported for Tempest currently [1] but after small patch [2] was merged I was today able to check what cause this issue. >>> >>> Test which is failing is in [3] and it looks that everything is going fine with it up to last line of test. So volume and port are created, attached, tags are set properly, both devices are detached properly also and at the end test is failing as in http://169.254.169.254/openstack/latest/meta_data.json still has some device inside. >>> And it looks now from [4] that it is volume which isn’t removed from this meta_data.json. >>> So I think that it would be good if people from Nova and Cinder teams could look at it and try to figure out what is going on there and how it can be fixed. >>> >>> Thanks in advance for help. >>> >>> [1] https://bugs.launchpad.net/tempest/+bug/1775947 >>> [2] https://review.openstack.org/#/c/578765/ >>> [3] https://github.com/openstack/tempest/blob/master/tempest/api/compute/servers/test_device_tagging.py#L330 >>> [4] http://logs.openstack.org/69/567369/15/check/tempest-full/528bc75/job-output.txt.gz#_2018-07-19_10_06_09_273919 >>> >>> — >>> Slawek Kaplonski >>> Senior software engineer >>> Red Hat >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> -- >> -- >> Artom Lifshitz >> Software Engineer, OpenStack Compute DFG > > > > -- > -- > Artom Lifshitz > Software Engineer, OpenStack Compute DFG > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev — Slawek Kaplonski Senior software engineer Red Hat From witold.bedyk at est.fujitsu.com Mon Jul 23 09:45:03 2018 From: witold.bedyk at est.fujitsu.com (Bedyk, Witold) Date: Mon, 23 Jul 2018 09:45:03 +0000 Subject: [openstack-dev] [self-healing] [ptg] [monasca] PTG track schedule published Message-ID: Hi Adam, if nothing else works, we could probably offer you half-day of Monasca slot on Monday or Tuesday afternoon. I'm afraid though that our room might be too small for you. Cheers Witek > -----Original Message----- > From: Thierry Carrez > Sent: Freitag, 20. 
Juli 2018 18:46 > To: Adam Spiers > Cc: openstack-dev mailing list > Subject: Re: [openstack-dev] [self-healing] [ptg] PTG track schedule > published > > Adam Spiers wrote: > > Apologies - I have had to change plans and leave on the Thursday > > evening (old friend is getting married on Saturday morning).  Is there > > any chance of swapping the self-healing slot with one of the others? > > It's tricky, as you asked to avoid conflicts with API SIG, Watcher, Monasca, > Masakari, and Mistral... Which day would be best for you given the current > schedule (assuming we don't move anything else as it's too late for that). > > -- > Thierry Carrez (ttx) > > __________________________________________________________ > ________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev- > request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From rnoriega at redhat.com Mon Jul 23 09:50:38 2018 From: rnoriega at redhat.com (Ricardo Noriega De Soto) Date: Mon, 23 Jul 2018 11:50:38 +0200 Subject: [openstack-dev] [tripleo] How to integrate a Heat plugin in a containerized deployment? Message-ID: Hello guys, I need to deploy the following Neutron BGPVPN heat plugin. https://docs.openstack.org/networking-bgpvpn/ocata/heat.html This will allow users, to create Heat templates with BGPVPN resources. Right now, BGPVPN service plugin is only available in neutron-server-opendaylight Kolla image: https://github.com/openstack/kolla/blob/master/docker/neutron/neutron-server-opendaylight/Dockerfile.j2#L13 It would make sense to add right there the python-networking-bgpvpn-heat package. Is that correct? Heat exposes a parameter to configure plugins ( HeatEnginePluginDirs), that corresponds to plugins_dir parameter in heat.conf. What is the issue here? Heat will try to search any available plugin in the path determined by HeatEnginePluginDirs, however, the heat plugin is located in a separate container (neutron_api). How should we tackle this? I see no other example of this type of integration. AFAIK, /usr/lib/python2.7/site-packages is not exposed to the host as a mounted volume, so how is heat supposed to find bgpvpn heat plugin? Thanks for your advice. Cheers -- Ricardo Noriega Senior Software Engineer - NFV Partner Engineer | Office of Technology | Red Hat irc: rnoriega @freenode -------------- next part -------------- An HTML attachment was scrubbed... URL: From bdobreli at redhat.com Mon Jul 23 10:47:33 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Mon, 23 Jul 2018 13:47:33 +0300 Subject: [openstack-dev] [tripleo] How to integrate a Heat plugin in a containerized deployment? In-Reply-To: References: Message-ID: On 7/23/18 12:50 PM, Ricardo Noriega De Soto wrote: > Hello guys, > > I need to deploy the following Neutron BGPVPN heat plugin. > > https://docs.openstack.org/networking-bgpvpn/ocata/heat.html > > > This will allow users, to create Heat templates with BGPVPN resources. > Right now, BGPVPN service plugin is only available in > neutron-server-opendaylight Kolla image: > > https://github.com/openstack/kolla/blob/master/docker/neutron/neutron-server-opendaylight/Dockerfile.j2#L13 > > > It would make sense to add right there the python-networking-bgpvpn-heat > package. Is that correct? 
Heat exposes a parameter to configure plugins You can override that via neutron_server_opendaylight_packages_append in tripleo common, like [0] [0] http://git.openstack.org/cgit/openstack/tripleo-common/tree/container-images/tripleo_kolla_template_overrides.j2#n76 > ( HeatEnginePluginDirs), that corresponds to plugins_dir parameter in > heat.conf. > > What is the issue here? > > Heat will try to search any available plugin in the path determined by > HeatEnginePluginDirs, however, the heat plugin is located in a separate > container (neutron_api). How should we tackle this? I see no other > example of this type of integration. Here is the most recent example [1] of inter-containers state sharing for Ironic containers. I think something similar should be done for docker/services/heat* yaml files. [1] https://review.openstack.org/#/c/584265/ > > AFAIK, /usr/lib/python2.7/site-packages is not exposed to the host as a > mounted volume, so how is heat supposed to find bgpvpn heat plugin? > > Thanks for your advice. > > Cheers > > > -- > Ricardo Noriega > > Senior Software Engineer - NFV Partner Engineer | Office of Technology >  | Red Hat > irc: rnoriega @freenode > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Best regards, Bogdan Dobrelya, Irc #bogdando From balazs.gibizer at ericsson.com Mon Jul 23 13:58:10 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Mon, 23 Jul 2018 15:58:10 +0200 Subject: [openstack-dev] [nova]Notification update week 30 Message-ID: <1532354290.11749.1@smtp.office365.com> Hi, Here is the latest notification subteam update. Bugs ---- No new bugs tagged with notifications and no progress with the existing ones. Features -------- Versioned notification transformation ------------------------------------- We have only a handfull of patches left before we can finally finish the multi year effort of transforming every legacy notifiaction to the versioned format. 3 of those patches already have a +2: https://review.openstack.org/#/q/status:open+topic:bp/versioned-notification-transformation-rocky Weekly meeting -------------- No meeting this week. Please ping me on IRC if you have something important to talk about. Cheers, gibi From aschultz at redhat.com Mon Jul 23 14:02:14 2018 From: aschultz at redhat.com (Alex Schultz) Date: Mon, 23 Jul 2018 08:02:14 -0600 Subject: [openstack-dev] [tripleo] Proposing Jose Luis Franco for TripleO core reviewer on Upgrade bits In-Reply-To: References: Message-ID: +1 On Fri, Jul 20, 2018 at 2:07 AM, Carlos Camacho Gonzalez wrote: > Hi!!! > > I'll like to propose Jose Luis Franco [1][2] for core reviewer in all the > TripleO upgrades bits. He shows a constant and active involvement in > improving and fixing our updates/upgrades workflows, he helps also trying to > develop/improve/fix our upstream support for testing the updates/upgrades. > > Please vote -1/+1, and consider this my +1 vote :) > > [1]: https://review.openstack.org/#/q/owner:jfrancoa%2540redhat.com > [2]: http://stackalytics.com/?release=all&metric=commits&user_id=jfrancoa > > Cheers, > Carlos. 
> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From bodenvmw at gmail.com Mon Jul 23 14:19:54 2018 From: bodenvmw at gmail.com (Boden Russell) Date: Mon, 23 Jul 2018 08:19:54 -0600 Subject: [openstack-dev] [neutron] Please use neutron-lib 1.18.0 for Rocky Message-ID: <5321591d-9b81-062e-319b-0aa674402198@gmail.com> If you're a networking project that uses neutron/neutron-lib, please read on. We recently created the stable/rocky branch for neutron-lib based on neutron-lib 1.18.0 and neutron is now using 1.18.0 as well [1]. If you're a networking project that depends on (uses) neutron/master then it's probably best your project is also using 1.18.0. Action items if you project uses neutron/master for Rocky: - If your project is covered in the existing patches to use neutron-lib 1.18.0 [2], please help verify/review. - If your project is not covered in [2], please update your requirements to use neutron-lib 1.18.0 in prep for Rocky. If you run into any issues with neutron-lib 1.18.0 please report them immediately and/or find me on #openstack-neutron Thanks [1] https://review.openstack.org/#/c/583671/ [2] https://review.openstack.org/#/q/topic:rocky-neutronlib From james.page at canonical.com Mon Jul 23 16:01:39 2018 From: james.page at canonical.com (James Page) Date: Mon, 23 Jul 2018 17:01:39 +0100 Subject: [openstack-dev] [sig][upgrades][ansible][charms][tripleo][kolla][airship] reboot or poweroff? Message-ID: Hi All tl;dr we (the original founders) have not managed to invest the time to get the Upgrades SIG booted - time to hit reboot or time to poweroff? Since Vancouver, two of the original SIG chairs have stepped down leaving me in the hot seat with minimal participation from either deployment projects or operators in the IRC meetings. In addition I've only been able to make every 3rd IRC meeting, so they have generally not being happening. I think the current timing is not good for a lot of folk so finding a better slot is probably a must-have if the SIG is going to continue - and maybe moving to a monthly or bi-weekly schedule rather than the weekly slot we have now. In addition I need some willing folk to help with leadership in the SIG. If you have an interest and would like to help please let me know! I'd also like to better engage with all deployment projects - upgrades is something that deployment tools should be looking to encapsulate as features, so it would be good to get deployment projects engaged in the SIG with nominated representatives. Based on the attendance in upgrades sessions in Vancouver and developer/operator appetite to discuss all things upgrade at said sessions I'm assuming that there is still interest in having a SIG for Upgrades but I may be wrong! Thoughts? James -------------- next part -------------- An HTML attachment was scrubbed... URL: From jistr at redhat.com Mon Jul 23 16:50:57 2018 From: jistr at redhat.com (=?UTF-8?B?SmnFmcOtIFN0csOhbnNrw70=?=) Date: Mon, 23 Jul 2018 18:50:57 +0200 Subject: [openstack-dev] [tripleo] Proposing Jose Luis Franco for TripleO core reviewer on Upgrade bits In-Reply-To: References: Message-ID: <11f43075-f885-5b02-ad7e-39ce55f26a9e@redhat.com> +1! On 20.7.2018 10:07, Carlos Camacho Gonzalez wrote: > Hi!!! 
> > I'll like to propose Jose Luis Franco [1][2] for core reviewer in all the > TripleO upgrades bits. He shows a constant and active involvement in > improving and fixing our updates/upgrades workflows, he helps also trying > to develop/improve/fix our upstream support for testing the > updates/upgrades. > > Please vote -1/+1, and consider this my +1 vote :) > > [1]: https://review.openstack.org/#/q/owner:jfrancoa%2540redhat.com > [2]: http://stackalytics.com/?release=all&metric=commits&user_id=jfrancoa > > Cheers, > Carlos. > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From emilien at redhat.com Mon Jul 23 18:33:33 2018 From: emilien at redhat.com (Emilien Macchi) Date: Mon, 23 Jul 2018 14:33:33 -0400 Subject: [openstack-dev] [tripleo] [tripleo-validations] using using top-level fact vars will deprecated in future Ansible versions Message-ID: Thanks Monty for pointing that out to me today on #ansible-devel. Context: https://github.com/ansible/ansible/pull/41811 The top-level fact vars are currently being deprecated in Ansible, maybe 2.7. It looks like it only affects tripleo-validations (in a quick look), but it could be more. See: http://codesearch.openstack.org/?q=ansible_facts&i=nope&files=&repos= An example playbook was written to explain what is deprecated: https://github.com/ansible/ansible/pull/41811#issuecomment-399220997 But it seems like, starting with Ansible 2.5 (what we already have in Rocky and beyond), we should encourage the usage of ansible_facts dictionary. Example: var=hostvars[inventory_hostname].ansible_facts.hostname instead of: var=ansible_hostname Can we have someone from TripleO Validations to help, and make sure we make it working for future versions of Ansible. Also there is a way to test this behavior by disabling the 'inject_facts_as_vars' option in ansible.cfg. Hope this helps, -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.friesen at windriver.com Mon Jul 23 18:53:21 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Mon, 23 Jul 2018 12:53:21 -0600 Subject: [openstack-dev] [StoryBoard] issues found while using storyboard Message-ID: <5B562421.4040001@windriver.com> Hi, I'm on a team that is starting to use StoryBoard, and I just thought I'd raise some issues I've recently run into. It may be that I'm making assumptions based on previous tools that I've used (Launchpad and Atlassian's Jira) and perhaps StoryBoard is intended to be used differently, so if that's the case please let me know. 1) There doesn't seem to be a formal way to search for newly-created stories that have not yet been triaged. 2) There doesn't seem to be a way to find stories/tasks using arbitrary boolean logic, for example something of the form "(A OR (B AND C)) AND NOT D". Automatic worklists will only let you do "(A AND B) OR (C AND D) OR (E AND F)" and story queries won't even let you do that. 3) I don't see a structured way to specify that a bug has been confirmed by someone other than the reporter, or how many people have been impacted by it. 4) I can't find a way to add attachments to a story. (Like a big log file, or a proposed patch, or a screenshot.) 5) I don't see a way to search for stories that have not been assigned to someone. 
6) This is more a convenience thing, but when looking at someone else's public automatic worklist, there's no way to see what the query terms were that generated the worklist. Chris From jungleboyj at gmail.com Mon Jul 23 19:07:09 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Mon, 23 Jul 2018 14:07:09 -0500 Subject: [openstack-dev] [StoryBoard] issues found while using storyboard In-Reply-To: <5B562421.4040001@windriver.com> References: <5B562421.4040001@windriver.com> Message-ID: <318a05f5-2082-7400-0790-9e82bd54800c@gmail.com> On 7/23/2018 1:53 PM, Chris Friesen wrote: > Hi, > > I'm on a team that is starting to use StoryBoard, and I just thought > I'd raise some issues I've recently run into.  It may be that I'm > making assumptions based on previous tools that I've used (Launchpad > and Atlassian's Jira) and perhaps StoryBoard is intended to be used > differently, so if that's the case please let me know. > > 1) There doesn't seem to be a formal way to search for newly-created > stories that have not yet been triaged. > > 2) There doesn't seem to be a way to find stories/tasks using > arbitrary boolean logic, for example something of the form "(A OR (B > AND C)) AND NOT D". Automatic worklists will only let you do "(A AND > B) OR (C AND D) OR (E AND F)" and story queries won't even let you do > that. > > 3) I don't see a structured way to specify that a bug has been > confirmed by someone other than the reporter, or how many people have > been impacted by it. > > 4) I can't find a way to add attachments to a story.  (Like a big log > file, or a proposed patch, or a screenshot.) Chris, Tom Barron and I have both raised this as a concern for Cinder and Manila.  I could not find a bug for not being able to create attachments so I have created one: https://storyboard.openstack.org/#!/story/2003071 Jay > > 5) I don't see a way to search for stories that have not been assigned > to someone. > > 6) This is more a convenience thing, but when looking at someone > else's public automatic worklist, there's no way to see what the query > terms were that generated the worklist. > > Chris > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From sean.mcginnis at gmx.com Mon Jul 23 19:20:59 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 23 Jul 2018 14:20:59 -0500 Subject: [openstack-dev] [release][ptl] Deadlines this week Message-ID: <20180723192058.GA15416@sm-workstation> Just a quick reminder that this week is a big one for deadlines. This Thursday, July 26, is our scheduled deadline for feature freeze, soft string freeze, client library freeze, and requirements freeze. String freeze is necessary to give our i18n team a chance at translating error strings. You are highly encouraged not to accept proposed changes containing modifications in user-facing strings (with consideration for important bug fixes of course). Such changes should be rejected by the review team and postponed until the next series development opens (which should happen when RC1 is published). The other freezes are to allow library changes and other code churn to settle down before we get to RC1. Import feature freeze exceptions should be requested from the project's PTL for them to decide if the risk is low enough to allow changes to still be accepted. 
Requirements updates will need a feature freeze exception from the requirements team. Those should be requested by sending a request to openstack-dev with the subject line containing "[requirements][ffe]". For more details, please refer to our published Rocky release schedule: https://releases.openstack.org/rocky/schedule.html Thanks! Sean From fm577c at att.com Mon Jul 23 19:22:47 2018 From: fm577c at att.com (MONTEIRO, FELIPE C) Date: Mon, 23 Jul 2018 19:22:47 +0000 Subject: [openstack-dev] [qa] [tempest] [patrole] Service client duplication between Tempest and Tempest plugins Message-ID: <7D5E803080EF7047850D309B333CB94E22E41449@GAALPA1MSGUSRBI.ITServices.sbc.com> Hi, ** Intention ** Intention is to expand Patrole testing to some service clients that already exist in some Tempest plugins, for core services only. ** Background ** Digging through Neutron testing, it seems like there is currently a lot of test duplication between neutron-tempest-plugin and Tempest [1]. Under some circumstances it seems OK to have redundant testing/parallel testing: "Having potential duplication between testing is not a big deal especially compared to the alternative of removing something which is actually providing value and is actively catching bugs, or blocking incorrect patches from landing" [2]. This leads me to the following question: If API test duplication is OK, what about service client duplication? Patches like [3] and [4] promote service client duplication with neutron-tempest-plugin. As far as I can tell, Neutron builds out some of its service clients dynamically here: [5]. Which includes segments service client (proposed as an addition to tempest.lib in [4]) here: [6]. This leads to a situation where if we want to offer RBAC testing for these APIs (to validate their policy enforcement), we can't really do so without adding the service client to Tempest, unless we rely on the neutron-tempest-plugin (for example) in Patrole's .zuul.yaml. ** Path Forward ** Option #1: For the core services, most service clients should live in tempest.lib for standardization/governance around documentation and stability for those clients. Service client duplication should try to be minimized as much as possible. API testing related to some service clients, though, should remain in the Tempest plugins. Option #2: Proceed with service client duplication, either by adding the service client to Tempest (or as yet another alternative, Patrole). This leads to maintenance overhead: have to maintain service clients in the plugins and Tempest itself. Option #3: Don't offer RBAC testing in Patrole plugin for those APIs. Thanks, Felipe [1] https://bugs.launchpad.net/neutron/+bug/1552960 [2] https://docs.openstack.org/tempest/latest/test_removal.html [3] https://review.openstack.org/#/c/482395/ [4] https://review.openstack.org/#/c/582340/ [5] http://git.openstack.org/cgit/openstack/neutron-tempest-plugin/tree/neutron_tempest_plugin/services/network/json/network_client.py [6] http://git.openstack.org/cgit/openstack/neutron-tempest-plugin/tree/neutron_tempest_plugin/api/test_timestamp.py -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Mon Jul 23 19:25:40 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 23 Jul 2018 14:25:40 -0500 Subject: [openstack-dev] [nova] Is the XenProject CI dead? Message-ID: <384b34c9-23d7-0a99-0f7b-e6275a8b92f1@gmail.com> We have the XenProject CI [1] which is supposed to run the libvirt+xen configuration. 
But I haven't seen it run on this libvirt driver change [2]. Does anyone know about its status? [1] https://wiki.openstack.org/wiki/ThirdPartySystems/XenProject_CI [2] https://review.openstack.org/#/c/560317/ -- Thanks, Matt From mriedemos at gmail.com Mon Jul 23 21:57:16 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 23 Jul 2018 16:57:16 -0500 Subject: [openstack-dev] [Openstack-operators][nova] Couple of CellsV2 questions In-Reply-To: References: Message-ID: <5eb59ccc-f860-b15c-7ed8-e1a04807adb7@gmail.com> I'll try to help a bit inline. Also cross-posting to openstack-dev and tagging with [nova] to highlight it. On 7/23/2018 10:43 AM, Jonathan Mills wrote: > I am looking at implementing CellsV2 with multiple cells, and there's a > few things I'm seeking clarification on: > > 1) How does a superconductor know that it is a superconductor?  Is its > operation different in any fundamental way?  Is there any explicit > configuration or a setting in the database required? Or does it simply > not care one way or another? It's a topology term, not really anything in config or the database that distinguishes the "super" conductor. I assume you've gone over the service layout in the docs: https://docs.openstack.org/nova/latest/user/cellsv2-layout.html#service-layout There are also some summit talks from Dan about the topology linked here: https://docs.openstack.org/nova/latest/user/cells.html#cells-v2 The superconductor is the conductor service at the "top" of the tree which interacts with the API and scheduler (controller) services and routes operations to the cell. Then once in a cell, the operation should ideally be confined there. So, for example, reschedules during a build would be confined to the cell. The cell conductor doesn't go back "up" to the scheduler to get a new set of hosts for scheduling. This of course depends on which release you're using and your configuration, see the caveats section in the cellsv2-layout doc. > > 2) When I ran the command "nova-manage cell_v2 create_cell --name=cell1 > --verbose", the entry created for cell1 in the api database includes > only one rabbitmq server, but I have three of them as an HA cluster. > Does it only support talking to one rabbitmq server in this > configuration? Or can I just update the cell1 transport_url in the > database to point to all three? Is that a supported configuration? First, don't update stuff directly in the database if you don't have to. :) What you set on the transport_url should be whatever oslo.messaging can handle: https://docs.openstack.org/nova/latest/configuration/config.html#DEFAULT.transport_url There is at least one reported bug for this but I'm not sure I fully grok it or what its status is at this point: https://bugs.launchpad.net/nova/+bug/1717915 > > 3) Is there anything wrong with having one cell share the amqp bus with > your control plane, while having additional cells use their own amqp > buses? Certainly I realize that the point of CellsV2 is to shard the > amqp bus for greater horizontal scalability.  But in my case, my first > cell is on the smaller side, and happens to be colocated with the > control plane hardware (whereas other cells will be in other parts of > the datacenter, or in other datacenters with high-speed links).  I was > thinking of just pointing that first cell back at the same rabbitmq > servers used by the control plane, but perhaps directing them at their > own rabbitmq vhost. Is that a terrible idea? 
Would need to get input from operators and/or Dan Smith's opinion on this one, but I'd say it's no worse than having a flat single cell deployment. However, if you're going to do multi-cell long-term anyway, then it would be best to get in the mindset and discipline of not relying on shared MQ between the controller services and the cells. In other words, just do the right thing from the start rather than have to worry about maybe changing the deployment / configuration for that one cell down the road when it's harder. -- Thanks, Matt From gmann at ghanshyammann.com Tue Jul 24 02:38:30 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 24 Jul 2018 11:38:30 +0900 Subject: [openstack-dev] [qa] [tempest] [patrole] Service client duplication between Tempest and Tempest plugins In-Reply-To: <7D5E803080EF7047850D309B333CB94E22E41449@GAALPA1MSGUSRBI.ITServices.sbc.com> References: <7D5E803080EF7047850D309B333CB94E22E41449@GAALPA1MSGUSRBI.ITServices.sbc.com> Message-ID: <164ca27003f.b18a531780446.7064526011429840442@ghanshyammann.com> ---- On Tue, 24 Jul 2018 04:22:47 +0900 MONTEIRO, FELIPE C wrote ---- > Hi, > > ** Intention ** > Intention is to expand Patrole testing to some service clients that already exist in some Tempest plugins, for core services only. > > ** Background ** > Digging through Neutron testing, it seems like there is currently a lot of test duplication between neutron-tempest-plugin and Tempest [1]. Under some circumstances it seems OK to have redundant testing/parallel testing: “Having potential duplication between testing is not a big deal especially compared to the alternative of removing something which is actually providing value and is actively catching bugs, or blocking incorrect patches from landing” [2]. We really need to minimize the test duplication. If there is test in tempest plugin for core services then, we do not need to add those in Tempest repo until it is interop requirement. This is for new tests so we can avoid the duplication in future. I will write this in Tempest reviewer guide. For existing duplicate tests, as per bug you mentioned[1] we need to cleanup the duplicate tests and they should live in their respective repo(either in neutron tempest plugin or tempest) which is categorized in etherpad[7]. How many tests are duplicated now? I will plan this as one of cleanup working item in stein. > > This leads me to the following question: If API test duplication is OK, what about service client duplication? Patches like [3] and [4] promote service client duplication with neutron-tempest-plugin. As far as I can tell, Neutron builds out some of its service clients dynamically here: [5]. Which includes segments service client (proposed as an addition to tempest.lib in [4]) here: [6]. Yeah, they are very dynamic in neutron plugins and its because of old legacy code. That is because when neutron tempest plugin was forked from Tempest as it is. These dynamic generation of service clients are really hard to debug and maintain. This can easily lead to backward incompatible changes if we make those service clients stable interface to consume outside. For those reason, we did fixed those in Tempest 3 years back [8] and made them static and consistent service client methods like other services clients. 
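To make the distinction concrete, below is a rough sketch of what a "static" service client in the tempest.lib style looks like. This is only an illustration with a hypothetical SegmentsClient, not the actual neutron client: one explicit, documented method per API call, instead of methods generated dynamically at runtime.

from oslo_serialization import jsonutils as json

from tempest.lib.common import rest_client


class SegmentsClient(rest_client.RestClient):
    """Hypothetical static client for the Neutron segments API (sketch)."""

    def create_segment(self, **kwargs):
        # POST /v2.0/segments with an explicit, reviewable signature
        uri = '/v2.0/segments'
        resp, body = self.post(uri, json.dumps({'segment': kwargs}))
        self.expected_success(201, resp.status)
        return rest_client.ResponseBody(resp, json.loads(body))

    def show_segment(self, segment_id):
        # GET /v2.0/segments/{segment_id}
        uri = '/v2.0/segments/%s' % segment_id
        resp, body = self.get(uri)
        self.expected_success(200, resp.status)
        return rest_client.ResponseBody(resp, json.loads(body))

Clients written in this explicit style are much easier to declare as stable interfaces and to consume from other plugins, because the method names and signatures are visible in the code rather than produced at import time.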
> > This leads to a situation where if we want to offer RBAC testing for these APIs (to validate their policy enforcement), we can’t really do so without adding the service client to Tempest, unless we rely on the neutron-tempest-plugin (for example) in Patrole’s .zuul.yaml. > > ** Path Forward ** > Option #1: For the core services, most service clients should live in tempest.lib for standardization/governance around documentation and stability for those clients. Service client duplication should try to be minimized as much as possible. API testing related to some service clients, though, should remain in the Tempest plugins. > > Option #2: Proceed with service client duplication, either by adding the service client to Tempest (or as yet another alternative, Patrole). This leads to maintenance overhead: have to maintain service clients in the plugins and Tempest itself. > > Option #3: Don’t offer RBAC testing in Patrole plugin for those APIs. We need to share the service clients among Tempest plugins. And each service clients which are being shared across repo has to be declared as stable interface like Tempest does. Idea here is service clients will live in the repo where their original tests were added or going to be added. For example in case of neutron tempest plugin, if rbac-policy API tests are in neutron then its service client needs to be owned by neutron-tempest-plugin. further rbac-policy service client can be consumed by Patrole. It is same case for congress tempest plugin, where they consume mistral service client. I recommended the same in that thread also of using service client from Mistral and Mistral make the service client as stable interface [9]. Which is being done in congress[10] Here are the general recommendation for Tempest Plugins for service clients : - Tempest Plugins should make their service clients as stable interface which gives 2 advantage: 1. By this you make sure that you are not allowing to change the API calling interface(service clietns) which indirectly means you are not allowing to change the APIs. Makes your tempest plugin testing more reliable. 2. Your service clients can be used in other Tempest plugins to avoid duplicate code/interface. If any other plugins use you service clients means, they also test your project so it is good to help them by providing the required interface as stable. Initial idea of owning the service clients in their respective plugins was to share them among plugins for integrated testing of more then one openstack service. - Usage of service clients across repo, Tempest provide a better way to do so than importing them directly [11]. You can see the example for Manila's tempest plugin [12]. This gives an advantage of discovering your registered service clients in other Tempest plugins automatically. I think its wroth to write as Doc in Tempest for Recommendation to Tempest Plugins. I will write one later this week. Now back to current question of Patrole, Let's check with neutron tempest plugin team about implementing the above recommendation and use the service client from there instead of duplicating it in Tempest. We should consume the service clients from neutron plugin and tempest where ever they live. How about below plan: Step 1. Neutron tempest plugin team declaring service client as stable interface which means no backward incompatible change. Step 2. Patrole import those service clients from neutron plugin as of now and proceed with testing. Step 3. 
Later, the neutron tempest plugin exposes its service clients via service client registration so that they can be discovered automatically rather than imported directly, the same way Tempest does. [7] https://etherpad.openstack.org/p/neutron-tempest-defork [8] https://review.openstack.org/#/q/status:merged+project:openstack/tempest+branch:master+topic:refactor_neutron_client [9] http://lists.openstack.org/pipermail/openstack-dev/2018-March/128483.html [10] https://github.com/openstack/congress-tempest-plugin/blob/master/congress_tempest_plugin/tests/scenario/manager_congress.py#L85 [11] https://docs.openstack.org/tempest/latest/plugin.html#get_service_clients() [12] https://review.openstack.org/#/c/334596/ -gmann > > Thanks, > > Felipe > > [1] https://bugs.launchpad.net/neutron/+bug/1552960 > [2] https://docs.openstack.org/tempest/latest/test_removal.html > [3] https://review.openstack.org/#/c/482395/ > [4] https://review.openstack.org/#/c/582340/ > [5] http://git.openstack.org/cgit/openstack/neutron-tempest-plugin/tree/neutron_tempest_plugin/services/network/json/network_client.py > [6] http://git.openstack.org/cgit/openstack/neutron-tempest-plugin/tree/neutron_tempest_plugin/api/test_timestamp.py > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From sangho at opennetworking.org Tue Jul 24 03:46:37 2018 From: sangho at opennetworking.org (Sangho Shin) Date: Tue, 24 Jul 2018 11:46:37 +0800 Subject: [openstack-dev] [neutron] Please use neutron-lib 1.18.0 for Rocky In-Reply-To: <5321591d-9b81-062e-319b-0aa674402198@gmail.com> References: <5321591d-9b81-062e-319b-0aa674402198@gmail.com> Message-ID: <73501776-0985-41A2-9D07-895E94EA2ACD@opennetworking.org> Hello Boden, Thank you for your notification. It applies also to the networking-xxxx projects. Right? Thank you, Sangho > On 2018. 7. 23., at 10:19 PM, Boden Russell wrote: > > If you're a networking project that uses neutron/neutron-lib, please > read on. > > We recently created the stable/rocky branch for neutron-lib based on > neutron-lib 1.18.0 and neutron is now using 1.18.0 as well [1]. If > you're a networking project that depends on (uses) neutron/master then > it's probably best your project is also using 1.18.0. > > Action items if you project uses neutron/master for Rocky: > - If your project is covered in the existing patches to use neutron-lib > 1.18.0 [2], please help verify/review. > - If your project is not covered in [2], please update your requirements > to use neutron-lib 1.18.0 in prep for Rocky.
> > If you run into any issues with neutron-lib 1.18.0 please report them > immediately and/or find me on #openstack-neutron > > Thanks > > [1] https://review.openstack.org/#/c/583671/ > [2] https://review.openstack.org/#/q/topic:rocky-neutronlib > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From cjeanner at redhat.com Tue Jul 24 04:44:57 2018 From: cjeanner at redhat.com (Cédric Jeanneret) Date: Tue, 24 Jul 2018 06:44:57 +0200 Subject: [openstack-dev] [tripleo] [tripleo-validations] using using top-level fact vars will deprecated in future Ansible versions In-Reply-To: References: Message-ID: <18a0510d-6dd7-1546-0f41-d17f22065957@redhat.com> On 07/23/2018 08:33 PM, Emilien Macchi wrote: > Thanks Monty for pointing that out to me today on #ansible-devel. > > Context: https://github.com/ansible/ansible/pull/41811 > The top-level fact vars are currently being deprecated in Ansible, maybe > 2.7. > It looks like it only affects tripleo-validations (in a quick look), but > it could be more. > See: http://codesearch.openstack.org/?q=ansible_facts&i=nope&files=&repos= > > An example playbook was written to explain what is deprecated: > https://github.com/ansible/ansible/pull/41811#issuecomment-399220997 > > But it seems like, starting with Ansible 2.5 (what we already have in > Rocky and beyond), we should encourage the usage of ansible_facts > dictionary. > Example: > var=hostvars[inventory_hostname].ansible_facts.hostname > instead of: > var=ansible_hostname guh.... I'm sorry, but this is nonsense, ugly as hell, and will just make things overcomplicated as sh*t. Like, really. I know we can't really have a say about that kind of decision, but... damn, WHY ?! Thanks for the heads-up though - will patch my current disk space validation update in order to take that into account. > > Can we have someone from TripleO Validations to help, and make sure we > make it working for future versions of Ansible. > Also there is a way to test this behavior by disabling the > 'inject_facts_as_vars' option in ansible.cfg. > > Hope this helps, > -- > Emilien Macchi > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Cédric Jeanneret Software Engineer DFG:DF -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From zhipengh512 at gmail.com Tue Jul 24 06:58:59 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Tue, 24 Jul 2018 14:58:59 +0800 Subject: [openstack-dev] [publiccloud-wg]New Meeting Time Starting This Week Message-ID: Hi Folks, As indicated in https://review.openstack.org/#/c/584389/, PCWG is moving towards a tick-tock meeting arrangement to better accommodate participants around the globe. For even weeks, starting this Wed, we will have a new meeting time at UTC0700. For odd weeks we will keep the UTC1400 time slot. Looking forward to meeting you all in #openstack-publiccloud on Wed!
-- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From bdobreli at redhat.com Tue Jul 24 08:57:27 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Tue, 24 Jul 2018 11:57:27 +0300 Subject: [openstack-dev] [tripleo] [tripleo-validations] using using top-level fact vars will deprecated in future Ansible versions In-Reply-To: References: Message-ID: On 7/23/18 9:33 PM, Emilien Macchi wrote: > But it seems like, starting with Ansible 2.5 (what we already have in > Rocky and beyond), we should encourage the usage of ansible_facts > dictionary. > Example: > var=hostvars[inventory_hostname].ansible_facts.hostname > instead of: > var=ansible_hostname If that means rewriting all ansible_foo things around the globe, we'd have a huge scope for changes. Those are used literally everywhere. Here is only a search for tripleo-quickstart [0] [0] http://codesearch.openstack.org/?q=%5B%5C.%27%22%5Dansible_%5CS%2B%5B%5E%3A%5D&i=nope&files=roles&repos=tripleo-quickstart -- Best regards, Bogdan Dobrelya, Irc #bogdando From paul.bourke at oracle.com Tue Jul 24 09:34:56 2018 From: paul.bourke at oracle.com (Paul Bourke) Date: Tue, 24 Jul 2018 10:34:56 +0100 Subject: [openstack-dev] [sig][upgrades][ansible][charms][tripleo][kolla][airship] reboot or poweroff? In-Reply-To: References: Message-ID: <2efb90a8-ad03-8400-ac8b-05878f903ffe@oracle.com> Hi James, Sorry to hear about the lack of participation. I for one am guilty of not taking part, there just seems to be never enough time in the day to cram in all the moving parts that a project like OpenStack requires. That being said, this effort is definitely one of the most important to the project imo, so I'm keen to step up. Moving to a monthly meeting sounds a good idea, at least till things get back on foot. Could you share what the current times / location for the meeting is? Cheers, -Paul On 23/07/18 17:01, James Page wrote: > Hi All > > tl;dr we (the original founders) have not managed to invest the time to > get the Upgrades SIG booted - time to hit reboot or time to poweroff? > > Since Vancouver, two of the original SIG chairs have stepped down > leaving me in the hot seat with minimal participation from either > deployment projects or operators in the IRC meetings.  In addition I've > only been able to make every 3rd IRC meeting, so they have generally not > being happening. > > I think the current timing is not good for a lot of folk so finding a > better slot is probably a must-have if the SIG is going to continue - > and maybe moving to a monthly or bi-weekly schedule rather than the > weekly slot we have now. > > In addition I need some willing folk to help with leadership in the > SIG.  If you have an interest and would like to help please let me know! > > I'd also like to better engage with all deployment projects - upgrades > is something that deployment tools should be looking to encapsulate as > features, so it would be good to get deployment projects engaged in the > SIG with nominated representatives. 
> > Based on the attendance in upgrades sessions in Vancouver and > developer/operator appetite to discuss all things upgrade at said > sessions I'm assuming that there is still interest in having a SIG for > Upgrades but I may be wrong! > > Thoughts? > > James > > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From lijie at unitedstack.com Tue Jul 24 10:07:24 2018 From: lijie at unitedstack.com (=?utf-8?B?UmFtYm8=?=) Date: Tue, 24 Jul 2018 18:07:24 +0800 Subject: [openstack-dev] [cinder] about block device driver Message-ID: Hi,all In the Cinder repository, I noticed that the BlockDeviceDriver driver is being deprecated, and was eventually be removed with the Queens release. https://github.com/openstack/cinder/blob/stable/ocata/cinder/volume/drivers/block_device.py However,I want to use it out of tree,but I don't know how to use it out of tree,Can you share me a doc? Thank you very much! Best Regards Rambo -------------- next part -------------- An HTML attachment was scrubbed... URL: From lyarwood at redhat.com Tue Jul 24 11:09:37 2018 From: lyarwood at redhat.com (Lee Yarwood) Date: Tue, 24 Jul 2018 12:09:37 +0100 Subject: [openstack-dev] [cinder][nova] Proper behavior for os-force_detach In-Reply-To: References: <20180717215359.GA31698@sm-workstation> <20180718090227.thr2kb2336vptaos@localhost> Message-ID: <20180724110937.ztqwywirfjnyeadr@lyarwood.usersys.redhat.com> On 20-07-18 08:10:37, Erlon Cruz wrote: > Nice, good to know. Thanks all for the feedback. We will fix that in our > drivers. FWIW Nova does not and AFAICT never has called os-force_detach. We previously used os-terminate_connection with v2 where the connector was optional. Even then we always provided one, even providing the destination connector during an evacuation when the source connector wasn't stashed in connection_info. > @Walter, so, in this case, if Cinder has the connector, it should not need > to call the driver passing a None object right? Yeah I don't think this is an issue with v3 given the connector is stashed with the attachment, so all we require is a reference to the attachment to cleanup the connection during evacuations etc. Lee > Erlon > > Em qua, 18 de jul de 2018 às 12:56, Walter Boring > escreveu: > > > The whole purpose of this test is to simulate the case where Nova doesn't > > know where the vm is anymore, > > or may simply not exist, but we need to clean up the cinder side of > > things. That being said, with the new > > attach API, the connector is being saved in the cinder database for each > > volume attachment. > > > > Walt > > > > On Wed, Jul 18, 2018 at 5:02 AM, Gorka Eguileor > > wrote: > > > >> On 17/07, Sean McGinnis wrote: > >> > On Tue, Jul 17, 2018 at 04:06:29PM -0300, Erlon Cruz wrote: > >> > > Hi Cinder and Nova folks, > >> > > > >> > > Working on some tests for our drivers, I stumbled upon this tempest > >> test > >> > > 'force_detach_volume' > >> > > that is calling Cinder API passing a 'None' connector. At the time > >> this was > >> > > added several CIs > >> > > went down, and people started discussing whether this > >> (accepting/sending a > >> > > None connector) > >> > > would be the proper behavior for what is expected to a driver to > >> do[1]. 
So, > >> > > some of CIs started > >> > > just skipping that test[2][3][4] and others implemented fixes that > >> made the > >> > > driver to disconnected > >> > > the volume from all hosts if a None connector was received[5][6][7]. > >> > > >> > Right, it was determined the correct behavior for this was to > >> disconnect the > >> > volume from all hosts. The CIs that are skipping this test should stop > >> doing so > >> > (once their drivers are fixed of course). > >> > > >> > > > >> > > While implementing this fix seems to be straightforward, I feel that > >> just > >> > > removing the volume > >> > > from all hosts is not the correct thing to do mainly considering that > >> we > >> > > can have multi-attach. > >> > > > >> > > >> > I don't think multiattach makes a difference here. Someone is forcibly > >> > detaching the volume and not specifying an individual connection. So > >> based on > >> > that, Cinder should be removing any connections, whether that is to one > >> or > >> > several hosts. > >> > > >> > >> Hi, > >> > >> I agree with Sean, drivers should remove all connections for the volume. > >> > >> Even without multiattach there are cases where you'll have multiple > >> connections for the same volume, like in a Live Migration. > >> > >> It's also very useful when Nova and Cinder get out of sync and your > >> volume has leftover connections. In this case if you try to delete the > >> volume you get a "volume in use" error from some drivers. > >> > >> Cheers, > >> Gorka. > >> > >> > >> > > So, my questions are: What is the best way to fix this problem? Should > >> > > Cinder API continue to > >> > > accept detachments with None connectors? If, so, what would be the > >> effects > >> > > on other Nova > >> > > attachments for the same volume? Is there any side effect if the > >> volume is > >> > > not multi-attached? > >> > > > >> > > Additionally to this thread here, I should bring this topic to > >> tomorrow's > >> > > Cinder's meeting, > >> > > so please join if you have something to share. > >> > > > >> > > >> > +1 - good plan. > >> > > >> > > >> > > >> __________________________________________________________________________ > >> > OpenStack Development Mailing List (not for usage questions) > >> > Unsubscribe: > >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > >> __________________________________________________________________________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: > >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: not available URL: From Jean-Philippe at evrard.me Tue Jul 24 11:12:52 2018 From: Jean-Philippe at evrard.me (Jean-Philippe Evrard) Date: Tue, 24 Jul 2018 11:12:52 +0000 Subject: [openstack-dev] [sig][upgrades][ansible][charms][tripleo][kolla][airship] reboot or poweroff? In-Reply-To: <2efb90a8-ad03-8400-ac8b-05878f903ffe@oracle.com> References: <2efb90a8-ad03-8400-ac8b-05878f903ffe@oracle.com> Message-ID: <3704F56E-F4BC-4738-9A07-C3B78777E743@evrard.me> Sorry about the lack of participation too. Monthly sounds good. Regards, JP On July 24, 2018 9:34:56 AM UTC, Paul Bourke wrote: >Hi James, > >Sorry to hear about the lack of participation. I for one am guilty of >not taking part, there just seems to be never enough time in the day to > >cram in all the moving parts that a project like OpenStack requires. > >That being said, this effort is definitely one of the most important to > >the project imo, so I'm keen to step up. > >Moving to a monthly meeting sounds a good idea, at least till things >get >back on foot. Could you share what the current times / location for the > >meeting is? > >Cheers, >-Paul > >On 23/07/18 17:01, James Page wrote: >> Hi All >> >> tl;dr we (the original founders) have not managed to invest the time >to >> get the Upgrades SIG booted - time to hit reboot or time to poweroff? >> >> Since Vancouver, two of the original SIG chairs have stepped down >> leaving me in the hot seat with minimal participation from either >> deployment projects or operators in the IRC meetings.  In addition >I've >> only been able to make every 3rd IRC meeting, so they have generally >not >> being happening. >> >> I think the current timing is not good for a lot of folk so finding a > >> better slot is probably a must-have if the SIG is going to continue - > >> and maybe moving to a monthly or bi-weekly schedule rather than the >> weekly slot we have now. >> >> In addition I need some willing folk to help with leadership in the >> SIG.  If you have an interest and would like to help please let me >know! >> >> I'd also like to better engage with all deployment projects - >upgrades >> is something that deployment tools should be looking to encapsulate >as >> features, so it would be good to get deployment projects engaged in >the >> SIG with nominated representatives. >> >> Based on the attendance in upgrades sessions in Vancouver and >> developer/operator appetite to discuss all things upgrade at said >> sessions I'm assuming that there is still interest in having a SIG >for >> Upgrades but I may be wrong! >> >> Thoughts? >> >> James >> >> >> >> >> >> >> >__________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >__________________________________________________________________________ >OpenStack Development Mailing List (not for usage questions) >Unsubscribe: >OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Sent from my Android device with K-9 Mail. Please excuse my brevity. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gr at ham.ie Tue Jul 24 11:42:25 2018 From: gr at ham.ie (Graham Hayes) Date: Tue, 24 Jul 2018 12:42:25 +0100 Subject: [openstack-dev] [qa] [tempest] [patrole] Service client duplication between Tempest and Tempest plugins In-Reply-To: <7D5E803080EF7047850D309B333CB94E22E41449@GAALPA1MSGUSRBI.ITServices.sbc.com> References: <7D5E803080EF7047850D309B333CB94E22E41449@GAALPA1MSGUSRBI.ITServices.sbc.com> Message-ID: <58193b20-a1f7-97ed-82ef-b81de8533331@ham.ie> On 23/07/2018 20:22, MONTEIRO, FELIPE C wrote: > Hi, > > ** Intention ** > > Intention is to expand Patrole testing to some service clients that > already exist in some Tempest plugins, for core services only. What exact projects does Patrole consider "core", and how are you making that decision? Is it a tag, InterOp, or some other criteria? From sean.mcginnis at gmx.com Tue Jul 24 13:31:25 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 24 Jul 2018 08:31:25 -0500 Subject: [openstack-dev] [cinder] about block device driver In-Reply-To: References: Message-ID: <20180724133125.GA26723@sm-workstation> On Tue, Jul 24, 2018 at 06:07:24PM +0800, Rambo wrote: > Hi,all > > > In the Cinder repository, I noticed that the BlockDeviceDriver driver is being deprecated, and was eventually be removed with the Queens release. > > > https://github.com/openstack/cinder/blob/stable/ocata/cinder/volume/drivers/block_device.py > > > However,I want to use it out of tree,but I don't know how to use it out of tree,Can you share me a doc? Thank you very much! > I don't think we have any community documentation on how to use out of tree drivers, but it's fairly straightforward. You can just drop in that block_device.py file in the cinder/volumes/drivers directory and configure its use in cinder.conf using the same volume_driver setting as before. I'm not sure if anything has been changed since Ocata that would require updates to the driver, but I would expect most base functionality should still work. But just a word of warning that there may be some updates to the driver needed if you find issues with it. Sean From cdent+os at anticdent.org Tue Jul 24 13:51:49 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 24 Jul 2018 14:51:49 +0100 (BST) Subject: [openstack-dev] [tc] [all] TC Report 18-30 Message-ID: HTML: https://anticdent.org/tc-report-18-30.html Yet another slow week at TC office hours. This is part of the normal ebb and flow of work, especially with feature freeze looming, but for some reason it bothers me. It reinforces my fears that the TC is either not particularly relevant or looking at the wrong things. Help make sure we are looking at the right things by: * coming to office hours and telling us what matters * responding to these reports and the ones that Doug produces * adding something to the [PTG planning etherpad](https://etherpad.openstack.org/p/tc-stein-ptg). [Last Thursday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-19.log.html#t2018-07-19T15:07:31) there was some discussion about forthcoming elections. First up are PTL elections for Stein. Note that it is quite likely that _if_ (as far as I can tell there's not much if about it, it is going to happen, sadly there's not very much transparency on these decisions and discussions, I wish there were) the Denver PTG is the last standalone PTG, then the Stein cycle may be longer than normal to sync up with summit schedules. 
[On Friday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-20.log.html#t2018-07-20T14:14:12) there was a bit of discussion on progress towards upgrading to Mailman 3 and using that as an opportunity to shrink the number of mailing lists. By having fewer, the hope is that some of the boundaries between groups within the community will be more permeable and will help email be the reliable information sharing mechanism. [This morning](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-24.log.html#t2018-07-24T12:08:03) there was yet more discussion about differences of opinion and approach when it comes to accepting projects to be official OpenStack projects. This is something that will be discussed at the PTG. It would be helpful if people who care about this could make their positions known. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From ashlee at openstack.org Tue Jul 24 14:23:40 2018 From: ashlee at openstack.org (Ashlee Ferguson) Date: Tue, 24 Jul 2018 09:23:40 -0500 Subject: [openstack-dev] OpenStack Summit Berlin - Community Voting Open Message-ID: <5D259863-CF2D-4D2C-B85C-4C029D686D75@openstack.org> Hi everyone, Session voting is now open for the November 2018 OpenStack Summit in Berlin! VOTE HERE Hurry, voting closes Thursday, July 26 at 11:59pm Pacific Time (Friday, July 27 at 6:59 UTC). The Programming Committees will ultimately determine the final schedule. Community votes are meant to help inform the decision, but are not considered to be the deciding factor. The Programming Committee members exercise judgment in their area of expertise and help ensure diversity. View full details of the session selection process here. Continue to visit https://www.openstack.org/summit/berlin-2018 for all Summit-related information. REGISTER Register for the Summit before prices increase in late August! VISA APPLICATION PROCESS Make sure to secure your Visa soon. More information about the Visa application process. TRAVEL SUPPORT PROGRAM August 30 is the last day to submit applications. Please submit your applications by 11:59pm Pacific Time (August 31 at 6:59am UTC). If you have any questions, please email summit at openstack.org . Cheers, Ashlee Ashlee Ferguson OpenStack Foundation ashlee at openstack.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From hongbin.lu at huawei.com Tue Jul 24 15:34:58 2018 From: hongbin.lu at huawei.com (Hongbin Lu) Date: Tue, 24 Jul 2018 15:34:58 +0000 Subject: [openstack-dev] [Ironic][Octavia][Congress] The usage of Neutron API Message-ID: <0957CD8F4B55C0418161614FEC580D6B2F9B9AC6@YYZEML701-CHM.china.huawei.com> Hi folks, Neutron has landed a patch to enable strict validation on query parameters when listing resources [1]. I tested the Neutorn's change in your project's gate and the result suggested that your projects would need the fixes [2][3][4] to keep the gate functioning. Please feel free to reach out if there is any question or concern. [1] https://review.openstack.org/#/c/574907/ [2] https://review.openstack.org/#/c/583990/ [3] https://review.openstack.org/#/c/584000/ [4] https://review.openstack.org/#/c/584112/ Best regards, Hongbin -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sgolovat at redhat.com Tue Jul 24 15:52:20 2018 From: sgolovat at redhat.com (Sergii Golovatiuk) Date: Tue, 24 Jul 2018 17:52:20 +0200 Subject: [openstack-dev] [tripleo] Proposing Jose Luis Franco for TripleO core reviewer on Upgrade bits In-Reply-To: <11f43075-f885-5b02-ad7e-39ce55f26a9e@redhat.com> References: <11f43075-f885-5b02-ad7e-39ce55f26a9e@redhat.com> Message-ID: ++1 On Mon, Jul 23, 2018 at 6:50 PM, Jiří Stránský wrote: > +1! > > > On 20.7.2018 10:07, Carlos Camacho Gonzalez wrote: >> >> Hi!!! >> >> I'll like to propose Jose Luis Franco [1][2] for core reviewer in all the >> TripleO upgrades bits. He shows a constant and active involvement in >> improving and fixing our updates/upgrades workflows, he helps also trying >> to develop/improve/fix our upstream support for testing the >> updates/upgrades. >> >> Please vote -1/+1, and consider this my +1 vote :) >> >> [1]: https://review.openstack.org/#/q/owner:jfrancoa%2540redhat.com >> [2]: http://stackalytics.com/?release=all&metric=commits&user_id=jfrancoa >> >> Cheers, >> Carlos. >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Best Regards, Sergii Golovatiuk From dmellado at redhat.com Tue Jul 24 15:55:37 2018 From: dmellado at redhat.com (Daniel Mellado Area) Date: Tue, 24 Jul 2018 17:55:37 +0200 Subject: [openstack-dev] [opentack-dev][kuryr][ptl] PTL Candidacy for Kuryr - Stein Message-ID: Dear all, I'd like to announce my candidacy for Kuryr's PTL for the Stein cycle. In case you don't know me, I was fortunate to work as PTL on Kuryr and its related projects for the Rocky cycle, where I'm happy to say that we've achieved most of the milestones we set. I would be honoured to continue doing this for the next six months. During Stein, I would like to focus on some of these topics. We've also started efforts in Rocky which I'd like to lead to completion. * Network Policy Support: This feature maps K8s network policies into Neutron security groups and it's something I'd personally like to lead to completion. * Neutron pooling resource speedups: Tied closely to the previous feature, it'll be needed as a way to further improve the speed at which Neutron handles its resources. * Operator Support * Octavia providers: Native OVN Layer 4 load balancing for services; Amphora provider for Routes * Native router support via Octavia * Multi device/net support Also, I'd like to coordinate finishing some features that might not be making it for the Rocky cycle, such as SRIOV and DPDK support, and adopt the usage of CRDs within the project. Outside of these key areas, my priority is also helping the community by acting as an interface for the cross-project sessions and further improving our presence in initiatives such as Openlab, OPNFV and so on. Thanks a lot! Daniel Mellado (dmellado) -------------- next part -------------- An HTML attachment was scrubbed...
URL: From dmellado at redhat.com Tue Jul 24 15:57:53 2018 From: dmellado at redhat.com (Daniel Mellado Area) Date: Tue, 24 Jul 2018 17:57:53 +0200 Subject: [openstack-dev] [kuryr] PTL on vacation Message-ID: Hi all, I'll be on vacation until July 31st, without easy access to email and computer. During that time Antoni Segura Puimedon (apuimedo) will be acting as my deputy (thanks in advance!) Best! Daniel -------------- next part -------------- An HTML attachment was scrubbed... URL: From myoung at redhat.com Tue Jul 24 16:18:43 2018 From: myoung at redhat.com (Matt Young) Date: Tue, 24 Jul 2018 12:18:43 -0400 Subject: [openstack-dev] [tripleo] [tripleo-validations] using using top-level fact vars will deprecated in future Ansible versions In-Reply-To: References: Message-ID: I've captured this as a point of discussion for the TripleO CI Team's planning session(s). Matt On Tue, Jul 24, 2018 at 4:59 AM Bogdan Dobrelya wrote: > > On 7/23/18 9:33 PM, Emilien Macchi wrote: > > But it seems like, starting with Ansible 2.5 (what we already have in > > Rocky and beyond), we should encourage the usage of ansible_facts > > dictionary. > > Example: > > var=hostvars[inventory_hostname].ansible_facts.hostname > > instead of: > > var=ansible_hostname > > If that means rewriting all ansible_foo things around the globe, we'd > have a huge scope for changes. Those are used literally everywhere. Here > is only a search for tripleo-quickstart [0] > > [0] > http://codesearch.openstack.org/?q=%5B%5C.%27%22%5Dansible_%5CS%2B%5B%5E%3A%5D&i=nope&files=roles&repos=tripleo-quickstart > > -- > Best regards, > Bogdan Dobrelya, > Irc #bogdando > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From alifshit at redhat.com Tue Jul 24 16:23:44 2018 From: alifshit at redhat.com (Artom Lifshitz) Date: Tue, 24 Jul 2018 12:23:44 -0400 Subject: [openstack-dev] [infra][nova] Running NFV tests in CI Message-ID: Hey all, tl;dr Humbly requesting a handful of nodes to run NFV tests in CI Intel has their NFV tests tempest plugin [1] and manages a third party CI for Nova. Two of the cores on that project (Stephen Finucane and Sean Mooney) have now moved to Red Hat, but the point still stands that there's a need and a use case for testing things like NUMA topologies, CPU pinning and hugepages. At Red Hat, we also have a similar tempest plugin project [2] that we use for downstream whitebox testing. The scope is a bit bigger than just NFV, but the main use case is still testing NFV code in an automated way. Given that there's a clear need for this sort of whitebox testing, I would like to humbly request a handful of nodes (in the 3 to 5 range) from infra to run an "official" Nova NFV CI. The code doing the testing would initially be the current Intel plugin, bug we could have a separate discussion about keeping "Intel" in the name or forking and/or renaming it to something more vendor-neutral. I won't be at PTG (conflict with personal travel), so I'm kindly asking Stephen and Sean to represent this idea in Denver. Cheers! 
[1] https://github.com/openstack/intel-nfv-ci-tests [2] https://review.rdoproject.org/r/#/admin/projects/openstack/whitebox-tempest-plugin From strigazi at gmail.com Tue Jul 24 16:27:36 2018 From: strigazi at gmail.com (Spyros Trigazis) Date: Tue, 24 Jul 2018 18:27:36 +0200 Subject: [openstack-dev] [magnum] New temporary meeting on Thursdays 1700UTC In-Reply-To: <1cdd9614-fef3-df33-f6c9-66d9d1764e5e@catalyst.net.nz> References: <1cdd9614-fef3-df33-f6c9-66d9d1764e5e@catalyst.net.nz> Message-ID: Hello list, After trial and error this is the new layout of the magnum meetings plus office hours. 1. The meeting moves to Tuesdays 2100 UTC starting today 2.1 Office hours for strigazi Tuesdays: 1300 to 1400 UTC 2.2 Office hours for flwang Wednesdays : 2200 to 2300 UTC Cheers, Spyros [0] https://wiki.openstack.org/wiki/Meetings/Containers On Tue, 26 Jun 2018 at 04:46, Fei Long Wang wrote: > Hi Spyros, > > Thanks for posting the discussion output. I'm not sure I can follow the > idea of simplifying CNI configuration. Though we have both calico and > flannel for k8s, but if we put both of them into single one config script. > The script could be very complex. That's why I think we should define some > naming and logging rules/policies for those scripts for long term > maintenance to make our life easier. Thoughts? > > On 25/06/18 19:20, Spyros Trigazis wrote: > > Hello again, > > After Thursday's meeting I want to summarize what we discussed and add > some pointers. > > > - Work on using the out-of-tree cloud provider and move to the new > model of defining it > https://storyboard.openstack.org/#!/story/1762743 > https://review.openstack.org/#/c/577477/ > - Configure kubelet and kube-proxy on master nodes > This story of the master node label can be extened > https://storyboard.openstack.org/#!/story/2002618 > or we can add a new one > - Simplify CNI configuration, we have calico and flannel. Ideally we > should a single config script for each > one. We could move flannel to the kubernetes hosted version that uses > kubernetes objects for storage. > (it is the recommended way by flannel and how it is done with kubeadm) > - magum support in gophercloud > https://github.com/gophercloud/gophercloud/issues/1003 > - *needs discussion *update version of heat templates (pike or queens) > This need its own tread > - Post deployment scripts for clusters, I have this since some time > for my but doing it in > heat is slightly (not a lot) complicated. Most magnum users favor the > simpler solution > of passing a url of a manifest or script to the cluster (at least > let's add sha512sum). > - Simplify addition of custom labels/parameters. To avoid patcing > magnum, it would be > more ops friendly to have a generic field of custom parameters > > Not discussed in the last meeting but we should in the next ones: > > - Allow cluster scaling from different users in the same project > https://storyboard.openstack.org/#!/story/2002648 > - Add the option to remove node from a resource group for swarm > clusters like > in kubernetes > https://storyboard.openstack.org/#!/story/2002677 > > Let's follow these up in the coming meetings, Tuesday 1000UTC and Thursday > 1700UTC. > > You can always consult this page [1] for future meetings. > > Cheers, > Spyros > > [1] https://wiki.openstack.org/wiki/Meetings/Containers > > On Wed, 20 Jun 2018 at 18:05, Spyros Trigazis wrote: > >> Hello list, >> >> We are going to have a second weekly meeting for magnum for 3 weeks >> as a test to reach out to contributors in the Americas. 
>> >> You can join us tomorrow (or today for some?) at 1700UTC in >> #openstack-containers . >> >> Cheers, >> Spyros >> >> >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > -- > Cheers & Best regards, > Feilong Wang (王飞龙) > -------------------------------------------------------------------------- > Senior Cloud Software Engineer > Tel: +64-48032246 > Email: flwang at catalyst.net.nz > Catalyst IT Limited > Level 6, Catalyst House, 150 Willis Street, Wellington > -------------------------------------------------------------------------- > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Tue Jul 24 16:30:00 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Tue, 24 Jul 2018 09:30:00 -0700 Subject: [openstack-dev] [infra][nova] Running NFV tests in CI In-Reply-To: References: Message-ID: <1532449800.2752809.1451389112.2DD9BA8A@webmail.messagingengine.com> On Tue, Jul 24, 2018, at 9:23 AM, Artom Lifshitz wrote: > Hey all, > > tl;dr Humbly requesting a handful of nodes to run NFV tests in CI > > Intel has their NFV tests tempest plugin [1] and manages a third party > CI for Nova. Two of the cores on that project (Stephen Finucane and > Sean Mooney) have now moved to Red Hat, but the point still stands > that there's a need and a use case for testing things like NUMA > topologies, CPU pinning and hugepages. > > At Red Hat, we also have a similar tempest plugin project [2] that we > use for downstream whitebox testing. The scope is a bit bigger than > just NFV, but the main use case is still testing NFV code in an > automated way. > > Given that there's a clear need for this sort of whitebox testing, I > would like to humbly request a handful of nodes (in the 3 to 5 range) > from infra to run an "official" Nova NFV CI. The code doing the > testing would initially be the current Intel plugin, bug we could have > a separate discussion about keeping "Intel" in the name or forking > and/or renaming it to something more vendor-neutral. The way you request nodes from Infra is through your Zuul configuration. Add jobs to a project to run tests on the node labels that you want. I'm guessing this process doesn't work for NFV tests because you have specific hardware requirements that are not met by our current VM resources? If that is the case it would probably be best to start by documenting what is required and where the existing VM resources fall short. In general though we operate on top of donated cloud resources, and if those do not work we will have to identify a source of resources that would work. > > I won't be at PTG (conflict with personal travel), so I'm kindly > asking Stephen and Sean to represent this idea in Denver. > > Cheers! 
> > [1] https://github.com/openstack/intel-nfv-ci-tests > [2] > https://review.rdoproject.org/r/#/admin/projects/openstack/whitebox-tempest-plugin From alifshit at redhat.com Tue Jul 24 17:21:49 2018 From: alifshit at redhat.com (Artom Lifshitz) Date: Tue, 24 Jul 2018 13:21:49 -0400 Subject: [openstack-dev] [infra][nova] Running NFV tests in CI In-Reply-To: <1532449800.2752809.1451389112.2DD9BA8A@webmail.messagingengine.com> References: <1532449800.2752809.1451389112.2DD9BA8A@webmail.messagingengine.com> Message-ID: On Tue, Jul 24, 2018 at 12:30 PM, Clark Boylan wrote: > On Tue, Jul 24, 2018, at 9:23 AM, Artom Lifshitz wrote: >> Hey all, >> >> tl;dr Humbly requesting a handful of nodes to run NFV tests in CI >> >> Intel has their NFV tests tempest plugin [1] and manages a third party >> CI for Nova. Two of the cores on that project (Stephen Finucane and >> Sean Mooney) have now moved to Red Hat, but the point still stands >> that there's a need and a use case for testing things like NUMA >> topologies, CPU pinning and hugepages. >> >> At Red Hat, we also have a similar tempest plugin project [2] that we >> use for downstream whitebox testing. The scope is a bit bigger than >> just NFV, but the main use case is still testing NFV code in an >> automated way. >> >> Given that there's a clear need for this sort of whitebox testing, I >> would like to humbly request a handful of nodes (in the 3 to 5 range) >> from infra to run an "official" Nova NFV CI. The code doing the >> testing would initially be the current Intel plugin, bug we could have >> a separate discussion about keeping "Intel" in the name or forking >> and/or renaming it to something more vendor-neutral. > > The way you request nodes from Infra is through your Zuul configuration. Add jobs to a project to run tests on the node labels that you want. Aha, thanks, I'll look into that. I was coming from a place of complete ignorance about infra. > > I'm guessing this process doesn't work for NFV tests because you have specific hardware requirements that are not met by our current VM resources? > If that is the case it would probably be best to start by documenting what is required and where the existing VM resources fall > short. Well, it should be possible to do most of what we'd like with nested virt and virtual NUMA topologies, though things like hugepages will need host configuration, specifically the kernel boot command [1]. Is that possible with the nodes we have? > In general though we operate on top of donated cloud resources, and if those do not work we will have to identify a source of resources that would work. Right, as always it comes down to resources and money. I believe historically Red Hat has been opposed to running an upstream third party CI (this is by no means an official Red Hat position, just remembering what I think I heard), but I can always see what I can do. 
[1] https://docs.openstack.org/nova/latest/admin/huge-pages.html#enabling-huge-pages-on-the-host From cboylan at sapwetik.org Tue Jul 24 18:47:18 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Tue, 24 Jul 2018 11:47:18 -0700 Subject: [openstack-dev] [infra][nova] Running NFV tests in CI In-Reply-To: References: <1532449800.2752809.1451389112.2DD9BA8A@webmail.messagingengine.com> Message-ID: <1532458038.3552690.1451538240.310285D1@webmail.messagingengine.com> On Tue, Jul 24, 2018, at 10:21 AM, Artom Lifshitz wrote: > On Tue, Jul 24, 2018 at 12:30 PM, Clark Boylan wrote: > > On Tue, Jul 24, 2018, at 9:23 AM, Artom Lifshitz wrote: > >> Hey all, > >> > >> tl;dr Humbly requesting a handful of nodes to run NFV tests in CI > >> > >> Intel has their NFV tests tempest plugin [1] and manages a third party > >> CI for Nova. Two of the cores on that project (Stephen Finucane and > >> Sean Mooney) have now moved to Red Hat, but the point still stands > >> that there's a need and a use case for testing things like NUMA > >> topologies, CPU pinning and hugepages. > >> > >> At Red Hat, we also have a similar tempest plugin project [2] that we > >> use for downstream whitebox testing. The scope is a bit bigger than > >> just NFV, but the main use case is still testing NFV code in an > >> automated way. > >> > >> Given that there's a clear need for this sort of whitebox testing, I > >> would like to humbly request a handful of nodes (in the 3 to 5 range) > >> from infra to run an "official" Nova NFV CI. The code doing the > >> testing would initially be the current Intel plugin, bug we could have > >> a separate discussion about keeping "Intel" in the name or forking > >> and/or renaming it to something more vendor-neutral. > > > > The way you request nodes from Infra is through your Zuul configuration. Add jobs to a project to run tests on the node labels that you want. > > Aha, thanks, I'll look into that. I was coming from a place of > complete ignorance about infra. > > > > I'm guessing this process doesn't work for NFV tests because you have specific hardware requirements that are not met by our current VM resources? > > If that is the case it would probably be best to start by documenting what is required and where the existing VM resources fall > > short. > > Well, it should be possible to do most of what we'd like with nested > virt and virtual NUMA topologies, though things like hugepages will > need host configuration, specifically the kernel boot command [1]. Is > that possible with the nodes we have? https://docs.openstack.org/infra/manual/testing.html attempts to give you an idea for what is currently available via the test environments. Nested virt has historically been painful because not all clouds support it and those that do did not do so in a reliable way (VMs and possibly hypervisors would crash). This has gotten better recently as nested virt is something more people have an interest in getting working but it is still hit and miss particularly as you use newer kernels in guests. I think if we can continue to work together with our clouds (thank you limestone, OVH, and vexxhost!) we may be able to work out nested virt that is redundant across multiple clouds. We will likely need individuals willing to keep caring for that though and debug problems when the next release of your favorite distro shows up. Can you get by with qemu or is nested virt required? 
As for hugepages, I've done a quick survey of cpuinfo across our clouds and all seem to have pse available but not all have pdpe1gb available. Are you using 1GB hugepages? Keep in mind that the test VMs only have 8GB of memory total. As for booting with special kernel parameters you can have your job make those modifications to the test environment then reboot the test environment within the job. There is some Zuul specific housekeeping that needs to be done post reboot, we can figure that out if we decide to go down this route. Would your setup work with 2M hugepages? > > > In general though we operate on top of donated cloud resources, and if those do not work we will have to identify a source of resources that would work. > > Right, as always it comes down to resources and money. I believe > historically Red Hat has been opposed to running an upstream third > party CI (this is by no means an official Red Hat position, just > remembering what I think I heard), but I can always see what I can do. > > [1] > https://docs.openstack.org/nova/latest/admin/huge-pages.html#enabling-huge-pages-on-the-host From chris.friesen at windriver.com Tue Jul 24 19:06:22 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Tue, 24 Jul 2018 13:06:22 -0600 Subject: [openstack-dev] [infra][nova] Running NFV tests in CI In-Reply-To: <1532458038.3552690.1451538240.310285D1@webmail.messagingengine.com> References: <1532449800.2752809.1451389112.2DD9BA8A@webmail.messagingengine.com> <1532458038.3552690.1451538240.310285D1@webmail.messagingengine.com> Message-ID: <5B5778AE.2060201@windriver.com> On 07/24/2018 12:47 PM, Clark Boylan wrote: > Can you get by with qemu or is nested virt required? Pretty sure that nested virt is needed in order to test CPU pinning. > As for hugepages, I've done a quick survey of cpuinfo across our clouds and all seem to have pse available but not all have pdpe1gb available. Are you using 1GB hugepages? If we want to test nova's handling of 1G hugepages then I think we'd need pdpe1gb. Chris From bodenvmw at gmail.com Tue Jul 24 19:10:28 2018 From: bodenvmw at gmail.com (Boden Russell) Date: Tue, 24 Jul 2018 13:10:28 -0600 Subject: [openstack-dev] [neutron] Please use neutron-lib 1.18.0 for Rocky In-Reply-To: <73501776-0985-41A2-9D07-895E94EA2ACD@opennetworking.org> References: <5321591d-9b81-062e-319b-0aa674402198@gmail.com> <73501776-0985-41A2-9D07-895E94EA2ACD@opennetworking.org> Message-ID: <72a5b0a6-2b32-75db-8b18-de8ff6a3d7a2@gmail.com> On 7/23/18 9:46 PM, Sangho Shin wrote: > It applies also to the networking-xxxx projects. Right? Yes. It should apply to any project that's using/depending-on neutron/master today. Note that I "think" the neutron-lib version required by neutron will trump the project's required version anyway, but it would be ideal if all such projects required the same/proper version. From mriedemos at gmail.com Tue Jul 24 20:15:53 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 24 Jul 2018 15:15:53 -0500 Subject: [openstack-dev] Lots of slow tests timing out jobs Message-ID: While going through our uncategorized gate failures [1] I found that we have a lot of jobs failing (161 in 7 days) due to the tempest run timing out [2]. I originally thought it was just the networking scenario tests, but I was able to identify a handful of API tests that are also taking nearly 3 minutes each, which seems like they should be moved to scenario tests and/or marked slow so they can be run in a dedicated tempest-slow job. 
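For anyone unfamiliar, marking a test slow is just an attr decorator on the test, roughly like the sketch below (the class and test names here are made up, only the decorator usage is the point):

    from tempest.api.compute import base
    from tempest.lib import decorators


    class ServersExampleTest(base.BaseV2ComputeTest):
        """Made-up test class, shown only to illustrate the slow tag."""

        @decorators.attr(type='slow')
        @decorators.idempotent_id('8d8e0e5c-1111-4d2a-9f6e-000000000001')
        def test_something_that_takes_minutes(self):
            # Tests carrying the 'slow' attr are excluded from the regular
            # integrated runs by the job's test regex and get picked up by
            # the dedicated tempest-slow job instead.
            pass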
I'm not sure how to get the history on the longest-running tests on average to determine where to start drilling down on the worst offenders, but it seems like an audit is in order. [1] http://status.openstack.org/elastic-recheck/data/integrated_gate.html [2] https://bugs.launchpad.net/tempest/+bug/1783405 -- Thanks, Matt From emilien at redhat.com Tue Jul 24 21:27:35 2018 From: emilien at redhat.com (Emilien Macchi) Date: Tue, 24 Jul 2018 17:27:35 -0400 Subject: [openstack-dev] [tripleo] The Weekly Owl - 26th Edition Message-ID: Welcome to the twenty-sixth edition of a weekly update in TripleO world! The goal is to provide a short reading (less than 5 minutes) to learn what's new this week. Any contributions and feedback are welcome. Link to the previous version: http://lists.openstack.org/pipermail/openstack-dev/2018-July/132301.html +---------------------------------+ | General announcements | +---------------------------------+ +--> Rocky Milestone 3 is this week. The team should focus on stabilization, bug fixing, testing, so we can make our rocky release more awesome. +--> Reminder about PTG etherpad, feel free to propose topics: https://etherpad.openstack.org/p/tripleo-ptg-stein +--> PTL elections are open! If you want to be the next TripleO PTL, it's the right time to send your candidacy *now* ! +------------------------------+ | Continuous Integration | +------------------------------+ +--> Sprint theme: migration to Zuul v3 (More on https://trello.com/c/vyWXcKOB/841-sprint-16-goals) +--> Sagi is the rover and Chandan is the ruck. Please tell them any CI issue. +--> Promotion on master is 4 days, 0 days on Queens, 2 days on Pike and 0 day on Ocata. +--> RDO Third Party jobs are currently down: https://tree.taiga.io/project/morucci-software-factory/issue/1560 +--> More: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting +-------------+ | Upgrades | +-------------+ +--> Progress on work for updates/upgrades with external installers: https://review.openstack.org/#/q/status:open+branch:master+topic:external-update-upgrade +--> More: https://etherpad.openstack.org/p/tripleo-upgrade-squad-status +---------------+ | Containers | +---------------+ +--> Lot of testing around containerized undercloud, please let us know any problem. +--> Image prepare via workflow is still work in progress. +--> More: https://etherpad.openstack.org/p/tripleo-containers-squad-status +----------------------+ | config-download | +----------------------+ +--> UI integration needs review. +--> Bug with failure listing is in progress: https://bugs.launchpad.net/tripleo/+bug/1779093 +--> More: https://etherpad.openstack.org/p/tripleo-config-download-squad-status +--------------+ | Integration | +--------------+ +--> No updates this week. +--> More: https://etherpad.openstack.org/p/tripleo-integration-squad-status +---------+ | UI/CLI | +---------+ +--> Major Network Configuration patches landed! Congrats team! +--> Config-download patches are being reviewed and a lot of testing is going on. +--> The team is working on a Tempest Plugin for TripleO UI: https://review.openstack.org/#/c/575730/ +--> More: https://etherpad.openstack.org/p/tripleo-ui-cli-squad-status +---------------+ | Validations | +---------------+ +--> No updates this week. +--> More: https://etherpad.openstack.org/p/tripleo-validations-squad-status +---------------+ | Networking | +---------------+ +--> No updates this week. 
+--> More: https://etherpad.openstack.org/p/tripleo-networking-squad-status +--------------+ | Workflows | +--------------+ +--> No updates this week. +--> More: https://etherpad.openstack.org/p/tripleo-workflows-squad-status +-----------+ | Security | +-----------+ +--> Working on Secrets management. +--> Last meeting notes: http://eavesdrop.openstack.org/meetings/security_squad/2018/security_squad.2018-07-18-12.07.html +--> More: https://etherpad.openstack.org/p/tripleo-security-squad +------------+ | Owl fact | +------------+ Owls feed the strongest babies first. As harsh as it sounds, the parents always feed the oldest and strongest owlet before its sibling. This means that if food is scarce, the youngest chicks will starve. After an owlet leaves the nest, it often lives nearby in the same tree, and its parents still bring it food. If it can survive the first winter on its own, its chances of survival are good. Source: http://mentalfloss.com/article/68473/15-mysterious-facts-about-owls Thank you all for reading and stay tuned! -- Your fellow reporter, Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From persia at shipstone.jp Tue Jul 24 23:54:51 2018 From: persia at shipstone.jp (Emmet Hikory) Date: Wed, 25 Jul 2018 08:54:51 +0900 Subject: [openstack-dev] [all][election] Nominations for OpenStack PTLs (Project Team Leads) are now open Message-ID: <20180724235451.GA8279@shipstone.jp> Nominations for OpenStack PTLs (Program Team Leads) are now open and will remain open until July 31st, 2018 23:45 UTC. This term is expected to be slightly longer than usual, as the release cycle is expected to adjust to match the Summit schedule. All nominations must be submitted as a text file to the openstack/election repository as explained at http://governance.openstack.org/election/#how-to-submit-your-candidacy Please make sure to follow the new candidacy file naming convention: $cycle_name/$project_name/$ircname.txt. In order to be an eligible candidate (and be allowed to vote) in a given PTL election, you need to have contributed to the corresponding team[0] during the Queens-Rocky timeframe (February 5th, 2018 00:00 UTC to July 24th, 2018 00:00 UTC). You must also be an OpenStack Foundation Individual Member in good standing. To check if your membership http://governance.openstack.org/election/#how-to-submit-your-candidacy Additional information about the nomination process can be found here: https://governance.openstack.org/election/ Shortly after election officials approve candidates, they will be listed here: https://governance.openstack.org/election/#stein-ptl-candidates The electorate is requested to confirm their email address in gerrit[1], prior to July 24th, 2018 midnight UTC so that the emailed ballots are mailed to the correct email address. This email address should match that which was provided in your foundation member profile[2] as well. 
Happy running, [0] https://governance.openstack.org/tc/reference/projects/ [1] https://review.openstack.org/#/settings/contact [2] https://www.openstack.org/profile/ -- Emmet HIKORY From fungi at yuggoth.org Wed Jul 25 01:26:56 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 25 Jul 2018 01:26:56 +0000 Subject: [openstack-dev] [all][election] Nominations for OpenStack PTLs (Project Team Leads) are now open In-Reply-To: <20180724235451.GA8279@shipstone.jp> References: <20180724235451.GA8279@shipstone.jp> Message-ID: <20180725012655.syvadejfcvgbxviv@yuggoth.org> On 2018-07-25 08:54:51 +0900 (+0900), Emmet Hikory wrote: [...] > All nominations must be submitted as a text file to the openstack/election > repository as explained at > http://governance.openstack.org/election/#how-to-submit-your-candidacy > > Please make sure to follow the new candidacy file naming convention: > $cycle_name/$project_name/$ircname.txt. [...] The directions on the Web page are correct, but it looks like we need to update our E-mail template to reflect last cycle's change from $ircname to $email_address instead. Just to be clear, the candidacy filename should be an E-mail address you use both with your Gerrit account and your OpenStack Foundation Individual Member profile since we'll use it both to confirm you have a qualifying change merged to a relevant deliverable repository and that you have an active foundation membership. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fm577c at att.com Wed Jul 25 01:27:26 2018 From: fm577c at att.com (MONTEIRO, FELIPE C) Date: Wed, 25 Jul 2018 01:27:26 +0000 Subject: [openstack-dev] [qa] [tempest] [patrole] Service client duplication between Tempest and Tempest plugins In-Reply-To: <164ca27003f.b18a531780446.7064526011429840442@ghanshyammann.com> References: <7D5E803080EF7047850D309B333CB94E22E41449@GAALPA1MSGUSRBI.ITServices.sbc.com> <164ca27003f.b18a531780446.7064526011429840442@ghanshyammann.com> Message-ID: <7D5E803080EF7047850D309B333CB94E22E45C16@GAALPA1MSGUSRBI.ITServices.sbc.com> Please see comments inline. > ---- On Tue, 24 Jul 2018 04:22:47 +0900 MONTEIRO, FELIPE C > wrote ---- > > Hi, > > > > ** Intention ** > > Intention is to expand Patrole testing to some service clients that already > exist in some Tempest plugins, for core services only. > > > > ** Background ** > > Digging through Neutron testing, it seems like there is currently a lot of > test duplication between neutron-tempest-plugin and Tempest [1]. Under > some circumstances it seems OK to have redundant testing/parallel testing: > “Having potential duplication between testing is not a big deal especially > compared to the alternative of removing something which is actually > providing value and is actively catching bugs, or blocking incorrect patches > from landing” [2]. > > We really need to minimize the test duplication. If there is test in tempest > plugin for core services then, we do not need to add those in Tempest repo > until it is interop requirement. This is for new tests so we can avoid the > duplication in future. I will write this in Tempest reviewer guide. > For existing duplicate tests, as per bug you mentioned[1] we need to cleanup > the duplicate tests and they should live in their respective repo(either in > neutron tempest plugin or tempest) which is categorized in etherpad[7]. How > many tests are duplicated now? 
I will plan this as one of cleanup working > item in stein. > > > > > This leads me to the following question: If API test duplication is OK, what > about service client duplication? Patches like [3] and [4] promote service > client duplication with neutron-tempest-plugin. As far as I can tell, Neutron > builds out some of its service clients dynamically here: [5]. Which includes > segments service client (proposed as an addition to tempest.lib in [4]) here: > [6]. > > Yeah, they are very dynamic in neutron plugins and its because of old legacy > code. That is because when neutron tempest plugin was forked from > Tempest as it is. These dynamic generation of service clients are really hard > to debug and maintain. This can easily lead to backward incompatible > changes if we make those service clients stable interface to consume > outside. For those reason, we did fixed those in Tempest 3 years back [8] and > made them static and consistent service client methods like other services > clients. > > > > > This leads to a situation where if we want to offer RBAC testing for these > APIs (to validate their policy enforcement), we can’t really do so without > adding the service client to Tempest, unless we rely on the neutron-tempest- > plugin (for example) in Patrole’s .zuul.yaml. > > > > ** Path Forward ** > > Option #1: For the core services, most service clients should live in > tempest.lib for standardization/governance around documentation and > stability for those clients. Service client duplication should try to be > minimized as much as possible. API testing related to some service clients, > though, should remain in the Tempest plugins. > > > > Option #2: Proceed with service client duplication, either by adding the > service client to Tempest (or as yet another alternative, Patrole). This leads > to maintenance overhead: have to maintain service clients in the plugins and > Tempest itself. > > > > Option #3: Don’t offer RBAC testing in Patrole plugin for those APIs. > > We need to share the service clients among Tempest plugins. And each > service clients which are being shared across repo has to be declared as > stable interface like Tempest does. Idea here is service clients will live in the > repo where their original tests were added or going to be added. For > example in case of neutron tempest plugin, if rbac-policy API tests are in > neutron then its service client needs to be owned by neutron-tempest-plugin. > further rbac-policy service client can be consumed by Patrole. It is same case > for congress tempest plugin, where they consume mistral service client. I > recommended the same in that thread also of using service client from > Mistral and Mistral make the service client as stable interface [9]. Which is > being done in congress[10] > > Here are the general recommendation for Tempest Plugins for service clients > : > - Tempest Plugins should make their service clients as stable interface which > gives 2 advantage: In this case we should also expand the Tempest plugin stable interface documentation here (which currently gives people a narrow understanding of what stable interface means) to include stable interfaces in other plugins: https://docs.openstack.org/tempest/latest/plugin.html#stable-tempest-apis-plugins-may-use > 1. By this you make sure that you are not allowing to change the API calling > interface(service clietns) which indirectly means you are not allowing to > change the APIs. Makes your tempest plugin testing more reliable. > > 2. 
Your service clients can be used in other Tempest plugins to avoid > duplicate code/interface. If any other plugins use you service clients means, > they also test your project so it is good to help them by providing the > required interface as stable. > > Initial idea of owning the service clients in their respective plugins was to > share them among plugins for integrated testing of more then one openstack > service. Thanks, this is good to know. > > - Usage of service clients across repo, Tempest provide a better way to do so > than importing them directly [11]. You can see the example for Manila's > tempest plugin [12]. This gives an advantage of discovering your registered > service clients in other Tempest plugins automatically. > > I think its wroth to write as Doc in Tempest for Recommendation to Tempest > Plugins. I will write one later this week. > > Now back to current question of Patrole, Let's check with neutron tempest > plugin team about implementing the above recommendation and use the > service client from there instead of duplicating it in Tempest. We should > consume the service clients from neutron plugin and tempest where ever > they live. > > How about below plan: > Step 1. Neutron tempest plugin team declaring service client as stable > interface which means no backward incompatible change. Ok, so it seems like this is just something that is agreed to in IRC and then formalized using a documentation update saying they will commit to a stable interface in their plugin. I am wondering whether there is/should be a governance tag for saying whether a plugin abides by Tempest's stable interface guidelines. > Step 2. Patrole import those service clients from neutron plugin as of now > and proceed with testing. > Step 3. Later neutron tempest plugin expose service clients via service client > registration so that their service clients can be discovered automatically than > importing them. Same way Tempest does. I will work with Neutron team to see about moving some of their legacy code to stable interface if they agree to this. 
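To make gmann's step 3 concrete for anyone following along, here is a rough sketch of what registering service clients through Tempest's plugin interface looks like. It mirrors the get_service_clients() hook documented in [11]; the module path and client names below are made up for illustration only and are not the real neutron-tempest-plugin layout:

    from tempest import config
    from tempest.test_discover import plugins

    class NeutronTempestPlugin(plugins.TempestPlugin):
        # load_tests / register_opts / get_opt_lists omitted from this sketch.

        def get_service_clients(self):
            # Reuse the [network] service client settings from tempest.conf.
            network_config = config.service_client_config('network')
            params = {
                'name': 'neutron_plugin',
                'service_version': 'neutron_plugin.v2',
                # Package that holds the service client classes (illustrative).
                'module_path': 'neutron_tempest_plugin.services.network.json',
                # Client classes to expose to consumers such as Patrole.
                'client_names': ['SegmentsClient', 'TrunksClient'],
            }
            params.update(network_config)
            return [params]

Once the clients are registered like this, consumers such as Patrole should be able to discover them through Tempest's client machinery rather than importing neutron-tempest-plugin modules directly.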
- Felipe > > > [7] https://urldefense.proofpoint.com/v2/url?u=https- > 3A__etherpad.openstack.org_p_neutron-2Dtempest- > 2Ddefork&d=DwIGaQ&c=LFYZ-o9_HUMeMTSQicvjIg&r=X4GwEru- > SJ9DRnCxhze-aw&m=OvA_eRSYUmntz1eBg1F-r1FDZchsK7u4OtLgezXae- > A&s=errkCNHIciPwWe-fA2xZ1yN0VisE-YIwV-cpZv-0PKI&e= > [8] https://urldefense.proofpoint.com/v2/url?u=https- > 3A__review.openstack.org_-23_q_status-3Amerged-2Bproject- > 3Aopenstack_tempest-2Bbranch-3Amaster-2Btopic-3Arefactor-5Fneutron- > 5Fclient&d=DwIGaQ&c=LFYZ-o9_HUMeMTSQicvjIg&r=X4GwEru- > SJ9DRnCxhze-aw&m=OvA_eRSYUmntz1eBg1F-r1FDZchsK7u4OtLgezXae- > A&s=G47gpRDWJUpv-IRZJXTPwn7DzrmIm7XKma_BozhVMOc&e= > [9] https://urldefense.proofpoint.com/v2/url?u=http- > 3A__lists.openstack.org_pipermail_openstack-2Ddev_2018- > 2DMarch_128483.html&d=DwIGaQ&c=LFYZ- > o9_HUMeMTSQicvjIg&r=X4GwEru-SJ9DRnCxhze- > aw&m=OvA_eRSYUmntz1eBg1F-r1FDZchsK7u4OtLgezXae- > A&s=xuspIDlgBB1uj9BZP_vfNp8KEdzHd_iy1VvBpGe_szM&e= > [10] https://urldefense.proofpoint.com/v2/url?u=https- > 3A__github.com_openstack_congress-2Dtempest- > 2Dplugin_blob_master_congress-5Ftempest- > 5Fplugin_tests_scenario_manager-5Fcongress.py- > 23L85&d=DwIGaQ&c=LFYZ-o9_HUMeMTSQicvjIg&r=X4GwEru-SJ9DRnCxhze- > aw&m=OvA_eRSYUmntz1eBg1F-r1FDZchsK7u4OtLgezXae- > A&s=cuEOpCNTzf3TRDkSQjIKkL6cGq6seYBb0ETpSmIM5dM&e= > [11] https://urldefense.proofpoint.com/v2/url?u=https- > 3A__docs.openstack.org_tempest_latest_plugin.html-23get-5Fservice- > 5Fclients-28-29&d=DwIGaQ&c=LFYZ-o9_HUMeMTSQicvjIg&r=X4GwEru- > SJ9DRnCxhze-aw&m=OvA_eRSYUmntz1eBg1F-r1FDZchsK7u4OtLgezXae- > A&s=4JdQX-W0qSV-T2D99zl9gO6mfZ8KQw8-7GtAq59b9DE&e= > [12] https://urldefense.proofpoint.com/v2/url?u=https- > 3A__review.openstack.org_-23_c_334596_&d=DwIGaQ&c=LFYZ- > o9_HUMeMTSQicvjIg&r=X4GwEru-SJ9DRnCxhze- > aw&m=OvA_eRSYUmntz1eBg1F-r1FDZchsK7u4OtLgezXae- > A&s=ZTAYLqZ9s4T42S9jUk5bBewSEzGhsDrm76TfVTItGdI&e= > > -gmann > > > > > Thanks, > > > > Felipe > > > > [1] https://urldefense.proofpoint.com/v2/url?u=https- > 3A__bugs.launchpad.net_neutron_-2Bbug_1552960&d=DwIGaQ&c=LFYZ- > o9_HUMeMTSQicvjIg&r=X4GwEru-SJ9DRnCxhze- > aw&m=OvA_eRSYUmntz1eBg1F-r1FDZchsK7u4OtLgezXae- > A&s=6LxRtrN9LZhGRBXF590Bs6C1wCih14Y6JfM_76Ns30E&e= > > [2] https://urldefense.proofpoint.com/v2/url?u=https- > 3A__docs.openstack.org_tempest_latest_test- > 5Fremoval.html&d=DwIGaQ&c=LFYZ-o9_HUMeMTSQicvjIg&r=X4GwEru- > SJ9DRnCxhze-aw&m=OvA_eRSYUmntz1eBg1F-r1FDZchsK7u4OtLgezXae- > A&s=qTEukP09Fe68swHryH3J6xCYX0ThWfinmcLKi-SDVLk&e= > > [3] https://urldefense.proofpoint.com/v2/url?u=https- > 3A__review.openstack.org_-23_c_482395_&d=DwIGaQ&c=LFYZ- > o9_HUMeMTSQicvjIg&r=X4GwEru-SJ9DRnCxhze- > aw&m=OvA_eRSYUmntz1eBg1F-r1FDZchsK7u4OtLgezXae- > A&s=dk1mrXrEQgzUFRx4OvToUouAYN5oi2pqsO6JrwSCMc8&e= > > [4] https://urldefense.proofpoint.com/v2/url?u=https- > 3A__review.openstack.org_-23_c_582340_&d=DwIGaQ&c=LFYZ- > o9_HUMeMTSQicvjIg&r=X4GwEru-SJ9DRnCxhze- > aw&m=OvA_eRSYUmntz1eBg1F-r1FDZchsK7u4OtLgezXae- > A&s=gnRz4yfHw5DQywEUFkJaIbzTjRKjFMpDwt0wJ8KDGPU&e= > > [5] https://urldefense.proofpoint.com/v2/url?u=http- > 3A__git.openstack.org_cgit_openstack_neutron-2Dtempest- > 2Dplugin_tree_neutron-5Ftempest- > 5Fplugin_services_network_json_network-5Fclient.py&d=DwIGaQ&c=LFYZ- > o9_HUMeMTSQicvjIg&r=X4GwEru-SJ9DRnCxhze- > aw&m=OvA_eRSYUmntz1eBg1F-r1FDZchsK7u4OtLgezXae- > A&s=9uIB2Sj4ia_NxOlVSwBM9aZizpfbCycgS6vN9CpItDM&e= > > [6] https://urldefense.proofpoint.com/v2/url?u=http- > 3A__git.openstack.org_cgit_openstack_neutron-2Dtempest- > 2Dplugin_tree_neutron-5Ftempest-5Fplugin_api_test- > 
5Ftimestamp.py&d=DwIGaQ&c=LFYZ-o9_HUMeMTSQicvjIg&r=X4GwEru- > SJ9DRnCxhze-aw&m=OvA_eRSYUmntz1eBg1F-r1FDZchsK7u4OtLgezXae- > A&s=K2tI5uH8MWQWc73ha7FcZhPZn4yavgstc8kQng6SwRY&e= > > > > > > > ______________________________________________________________ > ____________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev- > request at lists.openstack.org?subject:unsubscribe > > https://urldefense.proofpoint.com/v2/url?u=http- > 3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack- > 2Ddev&d=DwIGaQ&c=LFYZ-o9_HUMeMTSQicvjIg&r=X4GwEru-SJ9DRnCxhze- > aw&m=OvA_eRSYUmntz1eBg1F-r1FDZchsK7u4OtLgezXae- > A&s=7qFHngtdEnU2hibbKqDY2DRGMQOIfoaSjYJ0Xrei_tw&e= > > > > > > ______________________________________________________________ > ____________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev- > request at lists.openstack.org?subject:unsubscribe > https://urldefense.proofpoint.com/v2/url?u=http- > 3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack- > 2Ddev&d=DwIGaQ&c=LFYZ-o9_HUMeMTSQicvjIg&r=X4GwEru-SJ9DRnCxhze- > aw&m=OvA_eRSYUmntz1eBg1F-r1FDZchsK7u4OtLgezXae- > A&s=7qFHngtdEnU2hibbKqDY2DRGMQOIfoaSjYJ0Xrei_tw&e= From gmann at ghanshyammann.com Wed Jul 25 02:14:04 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 25 Jul 2018 11:14:04 +0900 Subject: [openstack-dev] [qa] [tempest] [patrole] Service client duplication between Tempest and Tempest plugins In-Reply-To: <7D5E803080EF7047850D309B333CB94E22E45C16@GAALPA1MSGUSRBI.ITServices.sbc.com> References: <7D5E803080EF7047850D309B333CB94E22E41449@GAALPA1MSGUSRBI.ITServices.sbc.com> <164ca27003f.b18a531780446.7064526011429840442@ghanshyammann.com> <7D5E803080EF7047850D309B333CB94E22E45C16@GAALPA1MSGUSRBI.ITServices.sbc.com> Message-ID: <164cf36fd4f.c909e623107631.3451983704489742023@ghanshyammann.com> ---- On Wed, 25 Jul 2018 10:27:26 +0900 MONTEIRO, FELIPE C wrote ---- > Please see comments inline. > > > ---- On Tue, 24 Jul 2018 04:22:47 +0900 MONTEIRO, FELIPE C > > wrote ---- > > > Hi, > > > > > > ** Intention ** > > > Intention is to expand Patrole testing to some service clients that already > > exist in some Tempest plugins, for core services only. > > > > > > ** Background ** > > > Digging through Neutron testing, it seems like there is currently a lot of > > test duplication between neutron-tempest-plugin and Tempest [1]. Under > > some circumstances it seems OK to have redundant testing/parallel testing: > > “Having potential duplication between testing is not a big deal especially > > compared to the alternative of removing something which is actually > > providing value and is actively catching bugs, or blocking incorrect patches > > from landing” [2]. > > > > We really need to minimize the test duplication. If there is test in tempest > > plugin for core services then, we do not need to add those in Tempest repo > > until it is interop requirement. This is for new tests so we can avoid the > > duplication in future. I will write this in Tempest reviewer guide. > > For existing duplicate tests, as per bug you mentioned[1] we need to cleanup > > the duplicate tests and they should live in their respective repo(either in > > neutron tempest plugin or tempest) which is categorized in etherpad[7]. How > > many tests are duplicated now? I will plan this as one of cleanup working > > item in stein. > > > > > > > > This leads me to the following question: If API test duplication is OK, what > > about service client duplication? 
Patches like [3] and [4] promote service > > client duplication with neutron-tempest-plugin. As far as I can tell, Neutron > > builds out some of its service clients dynamically here: [5]. Which includes > > segments service client (proposed as an addition to tempest.lib in [4]) here: > > [6]. > > > > Yeah, they are very dynamic in neutron plugins and its because of old legacy > > code. That is because when neutron tempest plugin was forked from > > Tempest as it is. These dynamic generation of service clients are really hard > > to debug and maintain. This can easily lead to backward incompatible > > changes if we make those service clients stable interface to consume > > outside. For those reason, we did fixed those in Tempest 3 years back [8] and > > made them static and consistent service client methods like other services > > clients. > > > > > > > > This leads to a situation where if we want to offer RBAC testing for these > > APIs (to validate their policy enforcement), we can’t really do so without > > adding the service client to Tempest, unless we rely on the neutron-tempest- > > plugin (for example) in Patrole’s .zuul.yaml. > > > > > > ** Path Forward ** > > > Option #1: For the core services, most service clients should live in > > tempest.lib for standardization/governance around documentation and > > stability for those clients. Service client duplication should try to be > > minimized as much as possible. API testing related to some service clients, > > though, should remain in the Tempest plugins. > > > > > > Option #2: Proceed with service client duplication, either by adding the > > service client to Tempest (or as yet another alternative, Patrole). This leads > > to maintenance overhead: have to maintain service clients in the plugins and > > Tempest itself. > > > > > > Option #3: Don’t offer RBAC testing in Patrole plugin for those APIs. > > > > We need to share the service clients among Tempest plugins. And each > > service clients which are being shared across repo has to be declared as > > stable interface like Tempest does. Idea here is service clients will live in the > > repo where their original tests were added or going to be added. For > > example in case of neutron tempest plugin, if rbac-policy API tests are in > > neutron then its service client needs to be owned by neutron-tempest-plugin. > > further rbac-policy service client can be consumed by Patrole. It is same case > > for congress tempest plugin, where they consume mistral service client. I > > recommended the same in that thread also of using service client from > > Mistral and Mistral make the service client as stable interface [9]. Which is > > being done in congress[10] > > > > Here are the general recommendation for Tempest Plugins for service clients > > : > > - Tempest Plugins should make their service clients as stable interface which > > gives 2 advantage: > > In this case we should also expand the Tempest plugin stable interface documentation here (which currently gives people a narrow understanding of what stable interface > means) to include stable interfaces in other plugins: https://docs.openstack.org/tempest/latest/plugin.html#stable-tempest-apis-plugins-may-use We can but giving reference to plugin doc which are actual owner of plugin stable interface in their repo. I would like to see the similar doc in plugin side also. Anyways that is good idea and let's discuss this in PTG to make it more formalized with discussion with Plugins team also and see what they think. 
I have added this as one of discussion item in PTG planning etherpad [1]. > > > 1. By this you make sure that you are not allowing to change the API calling > > interface(service clietns) which indirectly means you are not allowing to > > change the APIs. Makes your tempest plugin testing more reliable. > > > > 2. Your service clients can be used in other Tempest plugins to avoid > > duplicate code/interface. If any other plugins use you service clients means, > > they also test your project so it is good to help them by providing the > > required interface as stable. > > > > Initial idea of owning the service clients in their respective plugins was to > > share them among plugins for integrated testing of more then one openstack > > service. > > Thanks, this is good to know. > > > > > - Usage of service clients across repo, Tempest provide a better way to do so > > than importing them directly [11]. You can see the example for Manila's > > tempest plugin [12]. This gives an advantage of discovering your registered > > service clients in other Tempest plugins automatically. > > > > I think its wroth to write as Doc in Tempest for Recommendation to Tempest > > Plugins. I will write one later this week. > > > > Now back to current question of Patrole, Let's check with neutron tempest > > plugin team about implementing the above recommendation and use the > > service client from there instead of duplicating it in Tempest. We should > > consume the service clients from neutron plugin and tempest where ever > > they live. > > > > How about below plan: > > Step 1. Neutron tempest plugin team declaring service client as stable > > interface which means no backward incompatible change. > > Ok, so it seems like this is just something that is agreed to in IRC and then formalized using a documentation update saying they will commit to a stable interface in their plugin. I am wondering whether there is/should be a governance tag for saying whether a plugin abides by Tempest's stable interface guidelines. IRC agreement with PTL and doc are enough here. I have not thought of governance tag yet whether that is needed or not but let's discuss it if that can help. > > > Step 2. Patrole import those service clients from neutron plugin as of now > > and proceed with testing. > > Step 3. Later neutron tempest plugin expose service clients via service client > > registration so that their service clients can be discovered automatically than > > importing them. Same way Tempest does. > > I will work with Neutron team to see about moving some of their legacy code to stable interface if they agree to this. Thanks. That will be super helpful and i am sure slaweq (neutron Liaison in QA ). 
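Just to make step 2 concrete as well: until the registration work lands, Patrole would consume the client with a plain import, roughly like the sketch below. The module path and method name are illustrative only, not the actual neutron-tempest-plugin layout:

    from tempest import config
    # Illustrative import path; the real package layout will differ.
    from neutron_tempest_plugin.services.network.json import segments_client

    CONF = config.CONF
    # auth_provider is whatever credential provider the test framework
    # hands to the test class.
    segments = segments_client.SegmentsClient(
        auth_provider,
        CONF.network.catalog_type,
        CONF.network.region)
    segments.list_segments()  # hypothetical method name

This direct-import style is exactly why the stable-interface declaration in step 1 matters: once other repos import the client like this, any backward incompatible change in the plugin breaks them.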
> > - Felipe > > > [1] https://etherpad.openstack.org/p/qa-stein-ptg -gmann > > > > [7] https://urldefense.proofpoint.com/v2/url?u=https- > > 3A__etherpad.openstack.org_p_neutron-2Dtempest- > > 2Ddefork&d=DwIGaQ&c=LFYZ-o9_HUMeMTSQicvjIg&r=X4GwEru- > > SJ9DRnCxhze-aw&m=OvA_eRSYUmntz1eBg1F-r1FDZchsK7u4OtLgezXae- > > A&s=errkCNHIciPwWe-fA2xZ1yN0VisE-YIwV-cpZv-0PKI&e= > > [8] https://urldefense.proofpoint.com/v2/url?u=https- > > 3A__review.openstack.org_-23_q_status-3Amerged-2Bproject- > > 3Aopenstack_tempest-2Bbranch-3Amaster-2Btopic-3Arefactor-5Fneutron- > > 5Fclient&d=DwIGaQ&c=LFYZ-o9_HUMeMTSQicvjIg&r=X4GwEru- > > SJ9DRnCxhze-aw&m=OvA_eRSYUmntz1eBg1F-r1FDZchsK7u4OtLgezXae- > > A&s=G47gpRDWJUpv-IRZJXTPwn7DzrmIm7XKma_BozhVMOc&e= > > [9] https://urldefense.proofpoint.com/v2/url?u=http- > > 3A__lists.openstack.org_pipermail_openstack-2Ddev_2018- > > 2DMarch_128483.html&d=DwIGaQ&c=LFYZ- > > o9_HUMeMTSQicvjIg&r=X4GwEru-SJ9DRnCxhze- > > aw&m=OvA_eRSYUmntz1eBg1F-r1FDZchsK7u4OtLgezXae- > > A&s=xuspIDlgBB1uj9BZP_vfNp8KEdzHd_iy1VvBpGe_szM&e= > > [10] https://urldefense.proofpoint.com/v2/url?u=https- > > 3A__github.com_openstack_congress-2Dtempest- > > 2Dplugin_blob_master_congress-5Ftempest- > > 5Fplugin_tests_scenario_manager-5Fcongress.py- > > 23L85&d=DwIGaQ&c=LFYZ-o9_HUMeMTSQicvjIg&r=X4GwEru-SJ9DRnCxhze- > > aw&m=OvA_eRSYUmntz1eBg1F-r1FDZchsK7u4OtLgezXae- > > A&s=cuEOpCNTzf3TRDkSQjIKkL6cGq6seYBb0ETpSmIM5dM&e= > > [11] https://urldefense.proofpoint.com/v2/url?u=https- > > 3A__docs.openstack.org_tempest_latest_plugin.html-23get-5Fservice- > > 5Fclients-28-29&d=DwIGaQ&c=LFYZ-o9_HUMeMTSQicvjIg&r=X4GwEru- > > SJ9DRnCxhze-aw&m=OvA_eRSYUmntz1eBg1F-r1FDZchsK7u4OtLgezXae- > > A&s=4JdQX-W0qSV-T2D99zl9gO6mfZ8KQw8-7GtAq59b9DE&e= > > [12] https://urldefense.proofpoint.com/v2/url?u=https- > > 3A__review.openstack.org_-23_c_334596_&d=DwIGaQ&c=LFYZ- > > o9_HUMeMTSQicvjIg&r=X4GwEru-SJ9DRnCxhze- > > aw&m=OvA_eRSYUmntz1eBg1F-r1FDZchsK7u4OtLgezXae- > > A&s=ZTAYLqZ9s4T42S9jUk5bBewSEzGhsDrm76TfVTItGdI&e= > > > > -gmann > > > > > > > > Thanks, > > > > > > Felipe > > > > > > [1] https://urldefense.proofpoint.com/v2/url?u=https- > > 3A__bugs.launchpad.net_neutron_-2Bbug_1552960&d=DwIGaQ&c=LFYZ- > > o9_HUMeMTSQicvjIg&r=X4GwEru-SJ9DRnCxhze- > > aw&m=OvA_eRSYUmntz1eBg1F-r1FDZchsK7u4OtLgezXae- > > A&s=6LxRtrN9LZhGRBXF590Bs6C1wCih14Y6JfM_76Ns30E&e= > > > [2] https://urldefense.proofpoint.com/v2/url?u=https- > > 3A__docs.openstack.org_tempest_latest_test- > > 5Fremoval.html&d=DwIGaQ&c=LFYZ-o9_HUMeMTSQicvjIg&r=X4GwEru- > > SJ9DRnCxhze-aw&m=OvA_eRSYUmntz1eBg1F-r1FDZchsK7u4OtLgezXae- > > A&s=qTEukP09Fe68swHryH3J6xCYX0ThWfinmcLKi-SDVLk&e= > > > [3] https://urldefense.proofpoint.com/v2/url?u=https- > > 3A__review.openstack.org_-23_c_482395_&d=DwIGaQ&c=LFYZ- > > o9_HUMeMTSQicvjIg&r=X4GwEru-SJ9DRnCxhze- > > aw&m=OvA_eRSYUmntz1eBg1F-r1FDZchsK7u4OtLgezXae- > > A&s=dk1mrXrEQgzUFRx4OvToUouAYN5oi2pqsO6JrwSCMc8&e= > > > [4] https://urldefense.proofpoint.com/v2/url?u=https- > > 3A__review.openstack.org_-23_c_582340_&d=DwIGaQ&c=LFYZ- > > o9_HUMeMTSQicvjIg&r=X4GwEru-SJ9DRnCxhze- > > aw&m=OvA_eRSYUmntz1eBg1F-r1FDZchsK7u4OtLgezXae- > > A&s=gnRz4yfHw5DQywEUFkJaIbzTjRKjFMpDwt0wJ8KDGPU&e= > > > [5] https://urldefense.proofpoint.com/v2/url?u=http- > > 3A__git.openstack.org_cgit_openstack_neutron-2Dtempest- > > 2Dplugin_tree_neutron-5Ftempest- > > 5Fplugin_services_network_json_network-5Fclient.py&d=DwIGaQ&c=LFYZ- > > o9_HUMeMTSQicvjIg&r=X4GwEru-SJ9DRnCxhze- > > aw&m=OvA_eRSYUmntz1eBg1F-r1FDZchsK7u4OtLgezXae- > > 
A&s=9uIB2Sj4ia_NxOlVSwBM9aZizpfbCycgS6vN9CpItDM&e= > > > [6] https://urldefense.proofpoint.com/v2/url?u=http- > > 3A__git.openstack.org_cgit_openstack_neutron-2Dtempest- > > 2Dplugin_tree_neutron-5Ftempest-5Fplugin_api_test- > > 5Ftimestamp.py&d=DwIGaQ&c=LFYZ-o9_HUMeMTSQicvjIg&r=X4GwEru- > > SJ9DRnCxhze-aw&m=OvA_eRSYUmntz1eBg1F-r1FDZchsK7u4OtLgezXae- > > A&s=K2tI5uH8MWQWc73ha7FcZhPZn4yavgstc8kQng6SwRY&e= > > > > > > > > > > > ______________________________________________________________ > > ____________ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: OpenStack-dev- > > request at lists.openstack.org?subject:unsubscribe > > > https://urldefense.proofpoint.com/v2/url?u=http- > > 3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack- > > 2Ddev&d=DwIGaQ&c=LFYZ-o9_HUMeMTSQicvjIg&r=X4GwEru-SJ9DRnCxhze- > > aw&m=OvA_eRSYUmntz1eBg1F-r1FDZchsK7u4OtLgezXae- > > A&s=7qFHngtdEnU2hibbKqDY2DRGMQOIfoaSjYJ0Xrei_tw&e= > > > > > > > > > > > ______________________________________________________________ > > ____________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev- > > request at lists.openstack.org?subject:unsubscribe > > https://urldefense.proofpoint.com/v2/url?u=http- > > 3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack- > > 2Ddev&d=DwIGaQ&c=LFYZ-o9_HUMeMTSQicvjIg&r=X4GwEru-SJ9DRnCxhze- > > aw&m=OvA_eRSYUmntz1eBg1F-r1FDZchsK7u4OtLgezXae- > > A&s=7qFHngtdEnU2hibbKqDY2DRGMQOIfoaSjYJ0Xrei_tw&e= > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From fm577c at att.com Wed Jul 25 03:16:29 2018 From: fm577c at att.com (MONTEIRO, FELIPE C) Date: Wed, 25 Jul 2018 03:16:29 +0000 Subject: [openstack-dev] [qa] [tempest] [patrole] Service client duplication between Tempest and Tempest plugins In-Reply-To: <58193b20-a1f7-97ed-82ef-b81de8533331@ham.ie> References: <7D5E803080EF7047850D309B333CB94E22E41449@GAALPA1MSGUSRBI.ITServices.sbc.com> <58193b20-a1f7-97ed-82ef-b81de8533331@ham.ie> Message-ID: <7D5E803080EF7047850D309B333CB94E22E45CA1@GAALPA1MSGUSRBI.ITServices.sbc.com> > > Hi, > > > > ** Intention ** > > > > Intention is to expand Patrole testing to some service clients that > > already exist in some Tempest plugins, for core services only. > > What exact projects does Patrole consider "core", and how are you making > that decision? Is it a tag, InterOp, or some other criteria? > > We mean "core" only in the sense that Tempest means it: "the six client groups for the six core services covered by tempest in the big tent" [1]. That includes Nova, Neutron, Glance, Cinder and Keystone. Swift is not included in Patrole because Swift doesn't use oslo.policy for RBAC. [1] https://specs.openstack.org/openstack/qa-specs/specs/tempest/client-manager-refactor.html From zhang.lei.fly at gmail.com Wed Jul 25 03:48:24 2018 From: zhang.lei.fly at gmail.com (Jeffrey Zhang) Date: Wed, 25 Jul 2018 11:48:24 +0800 Subject: [openstack-dev] [kolla] ptl non candidacy Message-ID: Hi all, I just wanna to say I am not running PTL for Stein cycle. I have been involved in Kolla project for almost 3 years. And recently my work changes a little, too. So I may not have much time in the community in the future. Kolla is a great project and the community is also awesome. 
I would encourage everyone in the community to consider for running. Thanks for your support :D. -- Regards, Jeffrey Zhang Blog: http://xcodest.me -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Wed Jul 25 06:46:54 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 25 Jul 2018 15:46:54 +0900 Subject: [openstack-dev] Lots of slow tests timing out jobs In-Reply-To: References: Message-ID: <164d030c39e.101330779109144.5025774575873163310@ghanshyammann.com> ---- On Wed, 25 Jul 2018 05:15:53 +0900 Matt Riedemann wrote ---- > While going through our uncategorized gate failures [1] I found that we > have a lot of jobs failing (161 in 7 days) due to the tempest run timing > out [2]. I originally thought it was just the networking scenario tests, > but I was able to identify a handful of API tests that are also taking > nearly 3 minutes each, which seems like they should be moved to scenario > tests and/or marked slow so they can be run in a dedicated tempest-slow job. > > I'm not sure how to get the history on the longest-running tests on > average to determine where to start drilling down on the worst > offenders, but it seems like an audit is in order. yeah, there are many tests taking too long time. I do not know the reason this time but last time we did audit for slow tests was mainly due to ssh failure. I have created the similar ethercalc [3] to collect time taking tests and then round figure of their avg time taken since last 14 days from health dashboard. Yes, there is no calculated avg time on o-h so I did not take exact avg time its round figure. May be 14 days is too less to take decision to mark them slow but i think their avg time since 3 months will be same. should we consider 3 month time period for those ? As per avg time, I have voted (currently based on 14 days avg) on ethercalc which all test to mark as slow. I taken the criteria of >120 sec avg time. Once we have more and more people votes there we can mark them slow. [3] https://ethercalc.openstack.org/dorupfz6s9qt -gmann > > [1] http://status.openstack.org/elastic-recheck/data/integrated_gate.html > [2] https://bugs.launchpad.net/tempest/+bug/1783405 > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From jean-philippe at evrard.me Wed Jul 25 07:25:28 2018 From: jean-philippe at evrard.me (=?utf-8?q?jean-philippe=40evrard=2Eme?=) Date: Wed, 25 Jul 2018 09:25:28 +0200 Subject: [openstack-dev] =?utf-8?q?=5Bopenstack-ansible=5D_PTL_non-candida?= =?utf-8?q?cy?= Message-ID: <1b02-5b582600-d-5348c500@251177627> Hello everyone, If you were not at the previous OpenStack-Ansible meeting*, I'd like to inform you I will not be running for PTL of OSA. It's been a pleasure being the PTL of OSA for the last 2 cycles. We have improved in many ways: testing, stability, speed, features, documentation, user friendliness... I am glad of the work we achieved, and I think it's time for a fresh view with a new PTL. Thanks for being an awesome community. Jean-Philippe Evrard (evrardjp) *Please join! 4PM UTC in #openstack-ansible! 
From jean-philippe at evrard.me Wed Jul 25 07:31:31 2018 From: jean-philippe at evrard.me (=?utf-8?q?jean-philippe=40evrard=2Eme?=) Date: Wed, 25 Jul 2018 09:31:31 +0200 Subject: [openstack-dev] =?utf-8?b?Pz09P3V0Zi04P3E/ICBMb3RzIG9mIHNsb3cg?= =?utf-8?q?tests_timing_out_jobs?= In-Reply-To: <164d030c39e.101330779109144.5025774575873163310@ghanshyammann.com> Message-ID: <760d-5b582700-5-620baf80@144041990> On Wednesday, July 25, 2018 08:46 CEST, Ghanshyam Mann wrote: > ---- On Wed, 25 Jul 2018 05:15:53 +0900 Matt Riedemann wrote ---- > > While going through our uncategorized gate failures [1] I found that we > > have a lot of jobs failing (161 in 7 days) due to the tempest run timing > > out [2]. I originally thought it was just the networking scenario tests, > > but I was able to identify a handful of API tests that are also taking > > nearly 3 minutes each, which seems like they should be moved to scenario > > tests and/or marked slow so they can be run in a dedicated tempest-slow job. > > > > I'm not sure how to get the history on the longest-running tests on > > average to determine where to start drilling down on the worst > > offenders, but it seems like an audit is in order. > > yeah, there are many tests taking too long time. I do not know the reason this time but last time we did audit for slow tests was mainly due to ssh failure. > I have created the similar ethercalc [3] to collect time taking tests and then round figure of their avg time taken since last 14 days from health dashboard. Yes, there is no calculated avg time on o-h so I did not take exact avg time its round figure. > > May be 14 days is too less to take decision to mark them slow but i think their avg time since 3 months will be same. should we consider 3 month time period for those ? > > As per avg time, I have voted (currently based on 14 days avg) on ethercalc which all test to mark as slow. I taken the criteria of >120 sec avg time. Once we have more and more people votes there we can mark them slow. > > [3] https://ethercalc.openstack.org/dorupfz6s9qt > > -gmann > We have a similar observation in openstack-ansible. It is painful. Recently something that passed gates without rechecks (but close to timeout) took 14 (timeouts) rechecks to get in. 
In OSA, we will be starting a project to refactor our testing for being faster, but I'd like to have news of your research :) Thanks, Jean-Philippe (evrardjp) > > > > [1] http://status.openstack.org/elastic-recheck/data/integrated_gate.html > > [2] https://bugs.launchpad.net/tempest/+bug/1783405 > > > > -- > > > > Thanks, > > > > Matt > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From eumel at arcor.de Wed Jul 25 07:59:03 2018 From: eumel at arcor.de (Frank Kloeker) Date: Wed, 25 Jul 2018 09:59:03 +0200 Subject: [openstack-dev] [I18n] [PTL] [Election] Candidacy for I18n PTL in Stein Cycle Message-ID: <86fe687611b5783ff59664e292908161@arcor.de> [posted & mailed] https://review.openstack.org/#/c/585663/ This is my announcement for re-candidacy as I18n PTL in Stein Cycle. At first a quick review what we've done in the last cycle: 1. Zanata upgrade done. As one of the first community's we're running Zanata Release 4 in production. A big success and a great user experiense for all. 2. Translation Check Site done. We were able to win Deutsche Telekom as sponsor for resources, so now we are able to host our own Translation Check Site. From my point of view this solves different problems which we had in the past and now we can check translation strings very fast on our requirements. 3. Aquire more people to the team. I had great experiences during the OpenStack Days in Krakow and Budapest. I shared informationen what our team is doing and how I18n works in the OpenStack Community. I've got many inspirations and hopefully some new team members :-) What I mostly like is getting things done. In this cycle we should get ready project doc translations. We started already with some projects as a proof of concept and we're still working on it. To get that around, involve more projects and involve more project team members for translations is the biggest challenge for me in this cycle. On the other hand we have Edge Computing whitepaper and Container whitepaper on our translation plan. With a new technology in use to publish the translation results very fast on the web page. Beside that we have the OpenStack Summit Berlin in that cycle. For me a special event, since I live and work in Berlin. I expect a lot of collaboration and knowledge sharing with I18n and the OpenStack Community in general. That's my plan for Stein, I'm looking forward to your vote. Frank Email: eumel at arcor.de IRC: eumel8 Twitter: eumel_8 OpenStack Profile: https://www.openstack.org/community/members/profile/45058/frank-kloeker From gmann at ghanshyammann.com Wed Jul 25 09:44:46 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 25 Jul 2018 18:44:46 +0900 Subject: [openstack-dev] [nova] keypair quota usage info for user Message-ID: <164d0d39b12.cc6ff7f3115505.8532449585113306574@ghanshyammann.com> Hi All, During today API office hour, we were discussing about keypair quota usage bug (newton) [1]. 
key_pair 'in_use' quota is always 0 even when request per user which is because it is being set as 0 always [2]. >From checking the history and review discussion on [3], it seems that it was like that from staring. key_pair quota is being counted when actually creating the keypair but it is not shown in API 'in_use' field. Vishakha (assignee of this bug) is currently planing to work on this bug and before that we have few queries: 1. is it ok to show the keypair used info via API ? any original rational not to do so or it was just like that from starting. 2. Because this change will show the keypair used quota information in API's existing filed 'in_use', it is API behaviour change (not interface signature change in backward incompatible way) which can cause interop issue. Should we bump microversion for this change? [1] https://bugs.launchpad.net/nova/+bug/1644457 [2] https://github.com/openstack/nova/blob/bf497cc47497d3a5603bf60de652054ac5ae1993/nova/quota.py#L189 [3] https://review.openstack.org/#/c/446239/ -gmann From pranabjyotiboruah at gmail.com Wed Jul 25 09:52:27 2018 From: pranabjyotiboruah at gmail.com (pranab boruah) Date: Wed, 25 Jul 2018 15:22:27 +0530 Subject: [openstack-dev] [Openstack] [nova] [os-vif] [vif_plug_ovs] Support for OVS DB tcp socket communication. Message-ID: Hello folks, I have filed a bug in os-vif: https://bugs.launchpad.net/os-vif/+bug/1778724 and working on a patch. Any feedback/comments from you guys would be extremely helpful. Bug details: OVS DB server has the feature of listening over a TCP socket for connections rather than just on the unix domain socket. [0] If the OVS DB server is listening over a TCP socket, then the ovs-vsctl commands should include the ovsdb_connection parameter: # ovs-vsctl --db=tcp:IP:PORT ... eg: # ovs-vsctl --db=tcp:169.254.1.1:6640 add-port br-int eth0 Neutron supports running the ovs-vsctl commands with the ovsdb_connection parameter. The ovsdb_connection parameter is configured in openvswitch_agent.ini file. [1] While adding a vif to the ovs bridge(br-int), Nova(os-vif) invokes the ovs-vsctl command. Today, there is no support to pass the ovsdb_connection parameter while invoking the ovs-vsctl command. The support should be added. This would enhance the functionality of os-vif, since it would support a scenario when OVS DB server is listening on a TCP socket connection and on functional parity with Neutron. [0] http://www.openvswitch.org/support/dist-docs/ovsdb-server.1.html [1] https://docs.openstack.org/neutron/pike/configuration /openvswitch-agent.html TIA, Pranab -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Wed Jul 25 09:53:48 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 25 Jul 2018 18:53:48 +0900 Subject: [openstack-dev] [nova] API updates week 19-25 Message-ID: <164d0dbe0e0.fae56c84115824.4467269741317090143@ghanshyammann.com> Hi All, Please find the Nova API highlights of this week. Weekly Office Hour: =============== What we discussed this week: - Discussion on priority BP and remaining reviews on those. - Discussed keypair quota usage bug. Planned Features : ============== Below are the API related features for Rocky cycle. Nova API Sub team will start reviewing those to give their regular feedback. If anythings missing there feel free to add those in etherpad- https://etherpad.openstack.org/p/rocky-nova-priorities-tracking 1. 
Servers Ips non-unique network names : - https://blueprints.launchpad.net/nova/+spec/servers-ips-non-unique-network-names - Spec Merged - https://review.openstack.org/#/q/topic:bp/servers-ips-non-unique-network-names+(status:open+OR+status:merged) - Weekly Progress: I did not start this due to other work. This cannot make in Rocky and will plan for Stein early. 2. Abort live migration in queued state: - https://blueprints.launchpad.net/nova/+spec/abort-live-migration-in-queued-status - https://review.openstack.org/#/q/topic:bp/abort-live-migration-in-queued-status+(status:open+OR+status:merged) - Weekly Progress: COMPLETED 3. Complex anti-affinity policies: - https://blueprints.launchpad.net/nova/+spec/complex-anti-affinity-policies - https://review.openstack.org/#/q/topic:bp/complex-anti-affinity-policies+(status:open+OR+status:merged) - Weekly Progress: COMPLETED 4. Volume multiattach enhancements: - https://blueprints.launchpad.net/nova/+spec/volume-multiattach-enhancements - https://review.openstack.org/#/q/topic:bp/volume-multiattach-enhancements+(status:open+OR+status:merged) - Weekly Progress: No progress. 5. API Extensions merge work - https://blueprints.launchpad.net/nova/+spec/api-extensions-merge-rocky - https://review.openstack.org/#/q/project:openstack/nova+branch:master+topic:bp/api-extensions-merge-rocky - Weekly Progress: part-1 of schema merge and part-2 of server_create merge has been merged for Rocky. 1 last patch of removing the placeholder method are on gate. part-3 of view builder merge cannot make it to Rocky (7 patch up for review + 5 more to push)< Postponed this work to Stein. 6. Handling a down cell - https://blueprints.launchpad.net/nova/+spec/handling-down-cell - https://review.openstack.org/#/q/topic:bp/handling-down-cell+(status:open+OR+status:merged) - Weekly Progress: It is difficult to make it in Rocky? matt has open comment on patch about changing the service list along with server list in single microversion which make sense. Bugs: ==== Discussed about keypair quota bug. Sent separate mailing list for more feedback[1] This week Bug Progress: https://etherpad.openstack.org/p/nova-api-weekly-bug-report Critical: 0->0 High importance: 3->2 By Status: New: 0->0 Confirmed/Triage: 29-> 30 In-progress: 36->34 Incomplete: 4->4 ===== Total: 69->68 NOTE- There might be some bug which are not tagged as 'api' or 'api-ref', those are not in above list. Tag such bugs so that we can keep our eyes. [1] http://lists.openstack.org/pipermail/openstack-dev/2018-July/132459.html -gmann From mark at stackhpc.com Wed Jul 25 10:32:01 2018 From: mark at stackhpc.com (Mark Goddard) Date: Wed, 25 Jul 2018 11:32:01 +0100 Subject: [openstack-dev] [kolla] ptl non candidacy In-Reply-To: References: Message-ID: Thanks for your work as PTL during the Rocky cycle Jeffrey. Hope you are able to stay part of the community. Cheers, Mark On 25 July 2018 at 04:48, Jeffrey Zhang wrote: > Hi all, > > I just wanna to say I am not running PTL for Stein cycle. I have been > involved in Kolla project for almost 3 years. And recently my work changes > a little, too. So I may not have much time in the community in the future. Kolla > is a great project and the community is also awesome. I would encourage > everyone in the community to consider for running. > > Thanks for your support :D. 
> -- > Regards, > Jeffrey Zhang > Blog: http://xcodest.me > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sferdjao at redhat.com Wed Jul 25 10:50:58 2018 From: sferdjao at redhat.com (Sahid Orentino Ferdjaoui) Date: Wed, 25 Jul 2018 12:50:58 +0200 Subject: [openstack-dev] [Openstack] [nova] [os-vif] [vif_plug_ovs] Support for OVS DB tcp socket communication. In-Reply-To: References: Message-ID: <20180725105058.GA5871@redhat> On Wed, Jul 25, 2018 at 03:22:27PM +0530, pranab boruah wrote: > Hello folks, > > I have filed a bug in os-vif: > https://bugs.launchpad.net/os-vif/+bug/1778724 and > working on a patch. Any feedback/comments from you guys would be extremely > helpful. > > Bug details: > > OVS DB server has the feature of listening over a TCP socket for > connections rather than just on the unix domain socket. [0] > > If the OVS DB server is listening over a TCP socket, then the ovs-vsctl > commands should include the ovsdb_connection parameter: > # ovs-vsctl --db=tcp:IP:PORT ... > eg: > # ovs-vsctl --db=tcp:169.254.1.1:6640 add-port br-int eth0 > > Neutron supports running the ovs-vsctl commands with the ovsdb_connection > parameter. The ovsdb_connection parameter is configured in > openvswitch_agent.ini file. [1] > > While adding a vif to the ovs bridge(br-int), Nova(os-vif) invokes the > ovs-vsctl command. Today, there is no support to pass the ovsdb_connection > parameter while invoking the ovs-vsctl command. The support should be > added. This would enhance the functionality of os-vif, since it would > support a scenario when OVS DB server is listening on a TCP socket > connection and on functional parity with Neutron. > > [0] http://www.openvswitch.org/support/dist-docs/ovsdb-server.1.html > [1] https://docs.openstack.org/neutron/pike/configuration > /openvswitch-agent.html > TIA, > Pranab Hello Pranab, Makes sense for me. This is really related to the OVS plugin that we are maintaining. I guess you will have to add a new config option for it as we have with 'network_device_mtu' and 'ovs_vsctl_timeout'. Don't hesitate to add me as reviewer when patch is ready. Thanks, s. > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From zhipengh512 at gmail.com Wed Jul 25 11:26:12 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Wed, 25 Jul 2018 19:26:12 +0800 Subject: [openstack-dev] [cyborg]Stepping down (but stick around) Message-ID: Hi Team, For those of you who has been around since the "Nomad" time, you know how a terrific journey we have come along together. A pure idea, through community discussion and development, morphed into a project who is rapidly growing and gaining industry attentions. It is my privilege to serve as Cyborg's project's PTL for two cycles and I hope for all of my inefficiency and sometimes incapability as a tech lead, I did help the project grew both in development and governance (thank you for putting up with me btw :) ). 
We got help from the Nova team, release team, TC, Scientific SIG and other teams constantly, and we could not be where we are now without these hand-holdings. Although we have been suffering considerably high core reviewer fade out rate, we keep have new strong core reviewers coming in. This is what makes me proud and happy the most, and also why I'm comfortable with the decision of non-candidacy in Stein. A great open source project, should do without any specific leader, and keep grow organically. This is what I have been hoping for Cyborg to achieve. Hence I want to nominate Li Liu to be a candidate of PTL for Cyborg project in Stein cycle. Li Liu has been joining Cyborg development since very early stage and contributed a lot important work: deployable db design, metadata standardization, FPGA programming support, etc. As an expert both in FPGA synthesis as well as software development for OpenStack, I think Li Liu, or Uncle Li as we nicknamed him, is the best choice we should have for S release. I would like to emphasize that this does not mean I have done with Cyborg project, on the contrary I will be spending more time to build a great ecosystem for Cyborg project. We have four target areas (AI, NFV, Edge, HPC) and it will be an even more amazing journey in front of us. Keep up the good work folks, and let's work even harder. -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaosorior at gmail.com Wed Jul 25 12:03:38 2018 From: jaosorior at gmail.com (Juan Antonio Osorio) Date: Wed, 25 Jul 2018 15:03:38 +0300 Subject: [openstack-dev] [tripleo] PTL candidacy for the Stein Cycle Message-ID: Hello folks! I'd like to nominate myself for the TripleO PTL role for the Stein cycle. Alex has done a great job as a PTL: The project is progressing nicely with many new, exciting features and uses for TripleO coming to fruition recently. It's a great time for the project. But, there's more work to be done. I have served the TripleO community as a core-reviewer for some years now and, more recently, by driving the Security Squad. This project has been a great learning experience for me, both technically (I got to learn even more of OpenStack) and community-wise. Now I wish to better serve the community further by bringing my experiences into PTL role. While I have not yet served as PTL for a project before,I'm eager to learn the ropes and help improve the community that has been so influential on me. For Stein, I would like to focus on: * Increasing TripleO's usage in the testing of other projects Now that TripleO can deploy a standalone OpenStack installation, I hope it can be leveraged to add value to other projects' testing efforts. I hope this would subsequentially help increase TripleO's testing coverage, and reduce the footprint required for full-deployment testing. * Technical Debt & simplification We've been working on simplifying the deployment story and battle technical depth -- let’s keep this momentum going. 
We've been running (mostly) fully containerized environments for a couple of releases now; I hope we can reduce the number of stacks we create, which would in turn simplify the project structure (at least on the t-h-t side). We should also aim for the most convergence we can achieve (e.g. CLI and UI workflows). * CI and testing The project has made great progress regarding CI and testing; lets keep this moving forward and get developers easier ways to bring up testing environments for them to work on and to be able to reproduce CI jobs. Thanks! Juan Antonio Osorio Robles IRC: jaosorior -- Juan Antonio Osorio R. e-mail: jaosorior at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias.urdin at crystone.com Wed Jul 25 12:16:16 2018 From: tobias.urdin at crystone.com (Tobias Urdin) Date: Wed, 25 Jul 2018 12:16:16 +0000 Subject: [openstack-dev] [puppet-openstack] [announce] puppet-openstack now has Ubuntu 18.04 Bionic support Message-ID: Hello Stackers, Would just like to give a heads-up that the puppet-openstack project as of the Rocky release will supports Ubuntu 18.04 Bionic and we are as of yesterday/today checking that in infra Zuul CI. As a step for adding this support we also introduced support for the Ceph Mimic release for the puppet-ceph module. Because of upstream packaging Ceph Mimic cannot be used on Debian 9, and should also note that Ceph Luminous cannot be used on Ubuntu 18.04 Bionic using upstream Ceph community packages (Canonical is packaging Ceph in Bionic main repo). I would like to thank everybody contributing to this effort and for everyone involved in the puppet-openstack project that has reviewed all changes. A special thanks to all the infra-people that has helped out a bunch with mirrors, Zuul and providing all necessary bits required to work on this. Best regards Tobias From moshele at mellanox.com Wed Jul 25 12:27:04 2018 From: moshele at mellanox.com (Moshe Levi) Date: Wed, 25 Jul 2018 12:27:04 +0000 Subject: [openstack-dev] testing 123 Message-ID: -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Wed Jul 25 13:22:24 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 25 Jul 2018 08:22:24 -0500 Subject: [openstack-dev] Lots of slow tests timing out jobs In-Reply-To: <164d030c39e.101330779109144.5025774575873163310@ghanshyammann.com> References: <164d030c39e.101330779109144.5025774575873163310@ghanshyammann.com> Message-ID: <5a2492fe-d3b5-f0e9-d2c6-8275c7566836@gmail.com> On 7/25/2018 1:46 AM, Ghanshyam Mann wrote: > yeah, there are many tests taking too long time. I do not know the reason this time but last time we did audit for slow tests was mainly due to ssh failure. > I have created the similar ethercalc [3] to collect time taking tests and then round figure of their avg time taken since last 14 days from health dashboard. Yes, there is no calculated avg time on o-h so I did not take exact avg time its round figure. > > May be 14 days is too less to take decision to mark them slow but i think their avg time since 3 months will be same. should we consider 3 month time period for those ? > > As per avg time, I have voted (currently based on 14 days avg) on ethercalc which all test to mark as slow. I taken the criteria of >120 sec avg time. Once we have more and more people votes there we can mark them slow. > > [3]https://ethercalc.openstack.org/dorupfz6s9qt Thanks for this. 
I haven't gone through all of the tests in there yet, but noticed (yesterday) a couple of them were personality file compute API tests, which I thought was strange. Do we have any idea where the time is being spent there? I assume it must be something with ssh validation to try and read injected files off the guest. I need to dig into this one a bit more because by default, file injection is disabled in the libvirt driver so I'm not even sure how these are running (or really doing anything useful). Given we have deprecated personality files in the compute API [1] I would definitely mark those as slow tests so we can still run them but don't care about them as much. [1] https://docs.openstack.org/nova/latest/reference/api-microversion-history.html#id52 -- Thanks, Matt From aschultz at redhat.com Wed Jul 25 13:23:25 2018 From: aschultz at redhat.com (Alex Schultz) Date: Wed, 25 Jul 2018 07:23:25 -0600 Subject: [openstack-dev] [tripleo] PTL non-candidacy Message-ID: Hey folks, So it's been great fun and we've accomplished much over the last two cycles but I believe it is time for me to step back and let someone else do the PTLing. I'm not going anywhere so I'll still be around to focus on the simplification and improvements that TripleO needs going forward. I look forwards to continuing our efforts with everyone. Thanks, -Alex From ed at leafe.com Wed Jul 25 13:48:12 2018 From: ed at leafe.com (Ed Leafe) Date: Wed, 25 Jul 2018 08:48:12 -0500 Subject: [openstack-dev] [neutron][api][grapql] Proof of Concept In-Reply-To: References: Message-ID: On Jun 6, 2018, at 7:35 PM, Gilles Dubreuil wrote: > > The branch is now available under feature/graphql on the neutron core repository [1]. I wanted to follow up with you on this effort. I haven’t seen any activity on StoryBoard for several weeks now, and wanted to be sure that there was nothing blocking you that we could help with. -- Ed Leafe From surya.seetharaman9 at gmail.com Wed Jul 25 14:53:18 2018 From: surya.seetharaman9 at gmail.com (Surya Seetharaman) Date: Wed, 25 Jul 2018 16:53:18 +0200 Subject: [openstack-dev] [nova] API updates week 19-25 In-Reply-To: <164d0dbe0e0.fae56c84115824.4467269741317090143@ghanshyammann.com> References: <164d0dbe0e0.fae56c84115824.4467269741317090143@ghanshyammann.com> Message-ID: Hi! On Wed, Jul 25, 2018 at 11:53 AM, Ghanshyam Mann wrote: > > 5. API Extensions merge work > - https://blueprints.launchpad.net/nova/+spec/api-extensions-merge-rocky > - https://review.openstack.org/#/q/project:openstack/nova+ > branch:master+topic:bp/api-extensions-merge-rocky > - Weekly Progress: part-1 of schema merge and part-2 of server_create > merge has been merged for Rocky. 1 last patch of removing the placeholder > method are on gate. > part-3 of view builder merge > cannot make it to Rocky (7 patch up for review + 5 more to push)< Postponed > this work to Stein. > > 6. Handling a down cell > - https://blueprints.launchpad.net/nova/+spec/handling-down-cell > - https://review.openstack.org/#/q/topic:bp/handling-down- > cell+(status:open+OR+status:merged) > - Weekly Progress: It is difficult to make it in Rocky? matt has open > comment on patch about changing the service list along with server list in > single microversion which make > sense. > > ​The handling down cell spec related API changes will also be postponed to Stein since the view builder merge (part-3 of API Extensions merge work)​ is postponed to Stein. It would be more cleaner. -- Regards, Surya. 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From smonderer at vasonanetworks.com Wed Jul 25 15:38:18 2018 From: smonderer at vasonanetworks.com (Samuel Monderer) Date: Wed, 25 Jul 2018 18:38:18 +0300 Subject: Re: [openstack-dev] [tripleo] Mistral workflow cannot establish connection In-Reply-To: References: <1CDF4A32-AFB3-44F2-94C4-339EF36AE4D2@rm.ht> <35724F88-1BEB-4587-A6E7-AE07A2C648FC@rm.ht> Message-ID: Hi Steve, You were right: when I removed most of the roles it worked. I've encountered another problem. It seems that the network-isolation.yaml I used with OSP11 points to files that no longer exist, such as:
# Port assignments for the Controller role
OS::TripleO::Controller::Ports::ExternalPort: /usr/share/openstack-tripleo-heat-templates/network/ports/external.yaml
OS::TripleO::Controller::Ports::InternalApiPort: /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api.yaml
OS::TripleO::Controller::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml
OS::TripleO::Controller::Ports::StorageMgmtPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml
OS::TripleO::Controller::Ports::TenantPort: /usr/share/openstack-tripleo-heat-templates/network/ports/tenant.yaml
OS::TripleO::Controller::Ports::ManagementPort: /usr/share/openstack-tripleo-heat-templates/network/ports/management_from_pool.yaml
Have they moved to a different location, or are they created during the overcloud deployment? Thanks Samuel On Mon, Jul 16, 2018 at 3:06 PM Steven Hardy wrote: > On Sun, Jul 15, 2018 at 7:50 PM, Samuel Monderer > wrote: > > > > Hi Remo, > > > > Attached are templates I used for the deployment. They are based on a deployment we did with OSP11. > > I made the changes for it to work with OSP13. > > > > I do think it's the roles_data.yaml file that is causing the error because if I remove the " -r $TEMPLATES_DIR/roles_data.yaml" from the deployment script the deployment passes the point it was failing before but fails much later because of the missing definition of the role. > > I can't see a problem with the roles_data.yaml you provided, it seems to render ok using tripleo-heat-templates/tools/process-templates.py - are you sure the error isn't related to uploading the roles_data file to the swift container? > > I'd check basic CLI access to swift as a sanity check, e.g. something like: > > openstack container list > > and writing the roles data, e.g.: > > openstack object create overcloud roles_data.yaml > > If that works OK then it may be a haproxy timeout - you are specifying quite a lot of roles, so I wonder if something is timing out during the plan creation phase - we had some similar issues in CI, ref https://bugs.launchpad.net/tripleo-quickstart/+bug/1638908 where increasing the haproxy timeouts helped. > > Steve > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed...
URL: From corey.bryant at canonical.com Wed Jul 25 15:49:54 2018 From: corey.bryant at canonical.com (Corey Bryant) Date: Wed, 25 Jul 2018 11:49:54 -0400 Subject: [openstack-dev] [all] [designate] [heat] [python3] deadlock with eventlet and ThreadPoolExecutor in py3.7 Message-ID: Hi All, I'm trying to add Py3 packaging support for Ubuntu Rocky and while there are a lot of issues involved with supporting Py3.7, this is one of the big ones that I could use a hand with. With py3.7, there's a deadlock when eventlet monkeypatch of stdlib thread modules is combined with use of ThreadPoolExecutor. I know this affects at least designate. The same or similar also affects heat (though I've not dug into the code the traceback after canceling tests matches that seen with designate). And it may affect other projects that I haven't touched yet. How to recreate [1]: * designate: Add a tox.ini py37 target and run designate.tests.test_workers. test_processing.TestProcessingExecutor.test_execute_multiple_tasks * heat: Add a tox.ini py37 target and run tests * general: Run bpo34173-recreate.py from issue 34173 (see below). [1] ubuntu cosmic has py3.7 In issue 508 (see below) @tomoto asks "Eventlet and asyncio solve same problem. Why would you want concurrent.futures and eventlet in same application?" I told @tomoto that I'd seek input to that question from upstream. I know there've been efforts to move away from eventlet but I just don't have the knowledge to provide a good answer to him. Here are the bugs/issues I currently have open for this: https://github.com/eventlet/eventlet/issues/508 https://bugs.launchpad.net/designate/+bug/1782647 https://bugs.python.org/issue34173 Any help with this would be greatly appreciated! Thanks, Corey -------------- next part -------------- An HTML attachment was scrubbed... URL: From smonderer at vasonanetworks.com Wed Jul 25 15:56:37 2018 From: smonderer at vasonanetworks.com (Samuel Monderer) Date: Wed, 25 Jul 2018 18:56:37 +0300 Subject: [openstack-dev] [tripleo] network isolation can't find files referred to on director Message-ID: Hi, I'm trying to upgrade from OSP11(Ocata) to OSP13 (Queens) In my network-isolation I refer to files that do not exist anymore on the director such as OS::TripleO::Compute::Ports::ExternalPort: /usr/share/openstack-tripleo-heat-templates/network/ports/external.yaml OS::TripleO::Compute::Ports::InternalApiPort: /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api.yaml OS::TripleO::Compute::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml OS::TripleO::Compute::Ports::StorageMgmtPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml OS::TripleO::Compute::Ports::TenantPort: /usr/share/openstack-tripleo-heat-templates/network/ports/tenant.yaml OS::TripleO::Compute::Ports::ManagementPort: /usr/share/openstack-tripleo-heat-templates/network/ports/management_from_pool.yaml Where have they gone? Samuel -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhaochao1984 at gmail.com Wed Jul 25 16:09:37 2018 From: zhaochao1984 at gmail.com (=?UTF-8?B?6LW16LaF?=) Date: Thu, 26 Jul 2018 00:09:37 +0800 Subject: [openstack-dev] openstack-dev] [trove] Considering the transfter of the project leadership Message-ID: Hi All, Trove currently has a really small team, and all the active team members are from China, we had some good discussions during the Rocky online PTG meetings[1], and the goals were arranged and priorited [2][3]. 
But it's sad that none of us could focus on the project, and the number of patches and reviews fall a lot in this cycle comparing Queens. [1] https://etherpad.openstack.org/p/trove-ptg-rocky [2] https://etherpad.openstack.org/p/trove-priorities-and-specs-tracking [3] https://docs.google.com/spreadsheets/d/1Jz6TnmRHnhbg6J_tSBXv-SvYIrG4NLh4nWejupxqdeg/edit#gid=0 And for me, it's a really great chance to play as the PTL role of Trove, and I learned a lot during this cycle(from Trove projects to the CI infrastrues, and more). However in this cycle, I have been with no bandwith to work on the project for months, and the situation seems not be better in the forseeable future, so I think it's better to transfter the leadership, and look for opportunites for more anticipations in the project. A good news is recently a team from Samsung R&D Center in Krakow, Poland joined us, they're building a product on OpenStack, have done improvments on Trove(internally), and now interested in contributing to the community, starting by migrating the intergating tests to the tempest plugin. They're also willing and ready to act as the PTL role. The only problem for their nomination may be that none of them have a patched merged into the Trove projects. There're some in the trove-tempest-plugin waiting review, but according to the activities of the project, these patches may need a long time to merge (and we're at Rocky milestone-3, I think we could merge patches in the trove-tempest-plugin, as they're all abouth testing). I also hope and welcome the other current active team members of Trove could nominate themselves, in that way, we could get more discussions about how we think about the direction of Trove. I'll stll be here, to help the migration of the integration tests, CentOS guest images support, Cluster improvement and all other goals we discussed before, and code review. Thanks. -- To be free as in freedom. -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhaochao1984 at gmail.com Wed Jul 25 16:18:24 2018 From: zhaochao1984 at gmail.com (=?UTF-8?B?6LW16LaF?=) Date: Thu, 26 Jul 2018 00:18:24 +0800 Subject: [openstack-dev] openstack-dev] [trove] Considering the transfter of the project leadership In-Reply-To: References: Message-ID: cc to the Trove team members and guys from Samsung R&D Center in Krakow, Poland privately, so anyone of them who are not reading the ML could also be notified. On Thu, Jul 26, 2018 at 12:09 AM, 赵超 wrote: > Hi All, > > Trove currently has a really small team, and all the active team members > are from China, we had some good discussions during the Rocky online PTG > meetings[1], and the goals were arranged and priorited [2][3]. But it's sad > that none of us could focus on the project, and the number of patches and > reviews fall a lot in this cycle comparing Queens. > > [1] https://etherpad.openstack.org/p/trove-ptg-rocky > [2] https://etherpad.openstack.org/p/trove-priorities-and-specs-tracking > [3] https://docs.google.com/spreadsheets/d/1Jz6TnmRHnhbg6J_tSBXv- > SvYIrG4NLh4nWejupxqdeg/edit#gid=0 > > And for me, it's a really great chance to play as the PTL role of Trove, > and I learned a lot during this cycle(from Trove projects to the CI > infrastrues, and more). However in this cycle, I have been with no bandwith > to work on the project for months, and the situation seems not be better in > the forseeable future, so I think it's better to transfter the leadership, > and look for opportunites for more anticipations in the project. 
> > A good news is recently a team from Samsung R&D Center in Krakow, Poland > joined us, they're building a product on OpenStack, have done improvments > on Trove(internally), and now interested in contributing to the community, > starting by migrating the intergating tests to the tempest plugin. They're > also willing and ready to act as the PTL role. The only problem for their > nomination may be that none of them have a patched merged into the Trove > projects. There're some in the trove-tempest-plugin waiting review, but > according to the activities of the project, these patches may need a long > time to merge (and we're at Rocky milestone-3, I think we could merge > patches in the trove-tempest-plugin, as they're all abouth testing). > > I also hope and welcome the other current active team members of Trove > could nominate themselves, in that way, we could get more discussions about > how we think about the direction of Trove. > > I'll stll be here, to help the migration of the integration tests, CentOS > guest images support, Cluster improvement and all other goals we discussed > before, and code review. > > Thanks. > > -- > To be free as in freedom. > -- To be free as in freedom. -------------- next part -------------- An HTML attachment was scrubbed... URL: From prometheanfire at gentoo.org Wed Jul 25 16:28:48 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Wed, 25 Jul 2018 11:28:48 -0500 Subject: [openstack-dev] [release][ptl] Deadlines this week In-Reply-To: <20180723192058.GA15416@sm-workstation> References: <20180723192058.GA15416@sm-workstation> Message-ID: <20180725162848.6uxeukebz656koms@gentoo.org> On 18-07-23 14:20:59, Sean McGinnis wrote: > Just a quick reminder that this week is a big one for deadlines. > > This Thursday, July 26, is our scheduled deadline for feature freeze, soft > string freeze, client library freeze, and requirements freeze. > > String freeze is necessary to give our i18n team a chance at translating error > strings. You are highly encouraged not to accept proposed changes containing > modifications in user-facing strings (with consideration for important bug > fixes of course). Such changes should be rejected by the review team and > postponed until the next series development opens (which should happen when > RC1 is published). > > The other freezes are to allow library changes and other code churn to settle > down before we get to RC1. Import feature freeze exceptions should be requested > from the project's PTL for them to decide if the risk is low enough to allow > changes to still be accepted. > > Requirements updates will need a feature freeze exception from the requirements > team. Those should be requested by sending a request to openstack-dev with the > subject line containing "[requirements][ffe]". > > For more details, please refer to our published Rocky release schedule: > > https://releases.openstack.org/rocky/schedule.html > Final reminder, the requirements freeze starts tomorrow. I still see some projects trickling in, so this is your final warning. Starting tomorrow you will have to make a FFE request to the list first. -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From edmondsw at us.ibm.com Wed Jul 25 16:29:47 2018 From: edmondsw at us.ibm.com (William M Edmonds) Date: Wed, 25 Jul 2018 12:29:47 -0400 Subject: [openstack-dev] [nova] keypair quota usage info for user In-Reply-To: <164d0d39b12.cc6ff7f3115505.8532449585113306574@ghanshyammann.com> References: <164d0d39b12.cc6ff7f3115505.8532449585113306574@ghanshyammann.com> Message-ID: Ghanshyam Mann wrote on 07/25/2018 05:44:46 AM: ... snip ... > 1. is it ok to show the keypair used info via API ? any original > rational not to do so or it was just like that from starting. keypairs aren't tied to a tenant/project, so how could nova track/report a quota for them on a given tenant/project? Which is how the API is constructed... note the "tenant_id" in GET /os-quota-sets/{tenant_id}/detail > 2. Because this change will show the keypair used quota information > in API's existing filed 'in_use', it is API behaviour change (not > interface signature change in backward incompatible way) which can > cause interop issue. Should we bump microversion for this change? If we find a meaningful way to return in_use data for keypairs, then yes, I would expect a microversion bump so that callers can distinguish between a) talking to an older installation where in_use is always 0 vs. b) talking to a newer installation where in_use is 0 because there are really none in use. Or if we remove keypairs from the response, which at a glance seems to make more sense, that should also have a microversion bump so that someone who expects the old response format will still get it. -------------- next part -------------- An HTML attachment was scrubbed... URL: From singh.surya64mnnit at gmail.com Wed Jul 25 16:30:20 2018 From: singh.surya64mnnit at gmail.com (Surya Singh) Date: Wed, 25 Jul 2018 22:00:20 +0530 Subject: [openstack-dev] [kolla] ptl non candidacy In-Reply-To: References: Message-ID: Jeffrey, Great work with great leadership for Rocky Cycle. Hope to see you around always. ---spsurya On Wed, Jul 25, 2018 at 9:19 AM Jeffrey Zhang wrote: > Hi all, > > I just wanna to say I am not running PTL for Stein cycle. I have been > involved in Kolla project for almost 3 years. And recently my work changes > a little, too. So I may not have much time in the community in the future. Kolla > is a great project and the community is also awesome. I would encourage > everyone in the community to consider for running. > > Thanks for your support :D. > -- > Regards, > Jeffrey Zhang > Blog: http://xcodest.me > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From remo at rm.ht Wed Jul 25 16:33:20 2018 From: remo at rm.ht (Remo Mattei) Date: Wed, 25 Jul 2018 09:33:20 -0700 Subject: [openstack-dev] [tripleo] PTL candidacy for the Stein Cycle In-Reply-To: References: Message-ID: +1 for Juan, > On Jul 25, 2018, at 5:03 AM, Juan Antonio Osorio wrote: > > Hello folks! > > I'd like to nominate myself for the TripleO PTL role for the Stein cycle. > > Alex has done a great job as a PTL: The project is progressing nicely with many > new, exciting features and uses for TripleO coming to fruition recently. It's a > great time for the project. 
But, there's more work to be done. > > I have served the TripleO community as a core-reviewer for some years now and, > more recently, by driving the Security Squad. This project has been a > great learning experience for me, both technically (I got to learn even more of > OpenStack) and community-wise. Now I wish to better serve the community further > by bringing my experiences into PTL role. While I have not yet served as PTL > for a project before,I'm eager to learn the ropes and help improve the > community that has been so influential on me. > > For Stein, I would like to focus on: > > * Increasing TripleO's usage in the testing of other projects > Now that TripleO can deploy a standalone OpenStack installation, I hope it > can be leveraged to add value to other projects' testing efforts. I hope this > would subsequentially help increase TripleO's testing coverage, and reduce > the footprint required for full-deployment testing. > > * Technical Debt & simplification > We've been working on simplifying the deployment story and battle technical > depth -- let’s keep this momentum going. We've been running (mostly) fully > containerized environments for a couple of releases now; I hope we can reduce > the number of stacks we create, which would in turn simplify the project > structure (at least on the t-h-t side). We should also aim for the most > convergence we can achieve (e.g. CLI and UI workflows). > > * CI and testing > The project has made great progress regarding CI and testing; lets keep this > moving forward and get developers easier ways to bring up testing > environments for them to work on and to be able to reproduce CI jobs. > > Thanks! > > Juan Antonio Osorio Robles > IRC: jaosorior > > > -- > Juan Antonio Osorio R. > e-mail: jaosorior at gmail.com > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org ?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From harlowja at fastmail.com Wed Jul 25 16:54:00 2018 From: harlowja at fastmail.com (Joshua Harlow) Date: Wed, 25 Jul 2018 09:54:00 -0700 Subject: [openstack-dev] [all] [designate] [heat] [python3] deadlock with eventlet and ThreadPoolExecutor in py3.7 In-Reply-To: References: Message-ID: <5B58AB28.4000507@fastmail.com> Have you tried the following instead of threadpoolexecutor (which honestly should work as well, even under eventlet + eventlet patching). https://docs.openstack.org/futurist/latest/reference/index.html#futurist.GreenThreadPoolExecutor If you have the ability to specify which executor your code is using, and you are running under eventlet I'd give preference to the green thread pool executor under that situation (and if not running under eventlet then prefer the threadpool executor variant). As for @tomoto question; honestly openstack was created before asyncio was a thing so that was a reason and assuming eventlet patching is actually working then all the existing stdlib stuff should keep on working under eventlet (including concurrent.futures); otherwise eventlet.monkey_patch isn't working and that's breaking the eventlet api. 
If their contract is that only certain things work when monkey patched, that's fair, but that needs to be documented somewhere (honestly it's time imho to get the hell off eventlet everywhere but that likely requires rewrites of a lot of things, oops...). -Josh Corey Bryant wrote: > Hi All, > > I'm trying to add Py3 packaging support for Ubuntu Rocky and while there > are a lot of issues involved with supporting Py3.7, this is one of the > big ones that I could use a hand with. > > With py3.7, there's a deadlock when eventlet monkeypatch of stdlib > thread modules is combined with use of ThreadPoolExecutor. I know this > affects at least designate. The same or similar also affects heat > (though I've not dug into the code the traceback after canceling tests > matches that seen with designate). And it may affect other projects that > I haven't touched yet. > > How to recreate [1]: > * designate: Add a tox.ini py37 target and run > designate.tests.test_workers.test_processing.TestProcessingExecutor.test_execute_multiple_tasks > * heat: Add a tox.ini py37 target and run tests > * general: Run bpo34173-recreate.py > from issue > 34173 (see below). > [1] ubuntu cosmic has py3.7 > > In issue 508 (see below) @tomoto asks "Eventlet and asyncio solve same > problem. Why would you want concurrent.futures and eventlet in same > application?" > > I told @tomoto that I'd seek input to that question from upstream. I > know there've been efforts to move away from eventlet but I just don't > have the knowledge to provide a good answer to him. > > Here are the bugs/issues I currently have open for this: > https://github.com/eventlet/eventlet/issues/508 > > https://bugs.launchpad.net/designate/+bug/1782647 > > https://bugs.python.org/issue34173 > > Any help with this would be greatly appreciated! > > Thanks, > Corey > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From johfulto at redhat.com Wed Jul 25 17:00:58 2018 From: johfulto at redhat.com (John Fulton) Date: Wed, 25 Jul 2018 13:00:58 -0400 Subject: [openstack-dev] [tripleo] PTL candidacy for the Stein Cycle In-Reply-To: References: Message-ID: +1 On Wed, Jul 25, 2018 at 8:04 AM Juan Antonio Osorio wrote: > > Hello folks! > > I'd like to nominate myself for the TripleO PTL role for the Stein cycle. > > Alex has done a great job as a PTL: The project is progressing nicely with many > new, exciting features and uses for TripleO coming to fruition recently. It's a > great time for the project. But, there's more work to be done. > > I have served the TripleO community as a core-reviewer for some years now and, > more recently, by driving the Security Squad. This project has been a > great learning experience for me, both technically (I got to learn even more of > OpenStack) and community-wise. Now I wish to better serve the community further > by bringing my experiences into PTL role. While I have not yet served as PTL > for a project before,I'm eager to learn the ropes and help improve the > community that has been so influential on me. > > For Stein, I would like to focus on: > > * Increasing TripleO's usage in the testing of other projects > Now that TripleO can deploy a standalone OpenStack installation, I hope it > can be leveraged to add value to other projects' testing efforts. 
I hope this > would subsequentially help increase TripleO's testing coverage, and reduce > the footprint required for full-deployment testing. > > * Technical Debt & simplification > We've been working on simplifying the deployment story and battle technical > depth -- let’s keep this momentum going. We've been running (mostly) fully > containerized environments for a couple of releases now; I hope we can reduce > the number of stacks we create, which would in turn simplify the project > structure (at least on the t-h-t side). We should also aim for the most > convergence we can achieve (e.g. CLI and UI workflows). > > * CI and testing > The project has made great progress regarding CI and testing; lets keep this > moving forward and get developers easier ways to bring up testing > environments for them to work on and to be able to reproduce CI jobs. > > Thanks! > > Juan Antonio Osorio Robles > IRC: jaosorior > > > -- > Juan Antonio Osorio R. > e-mail: jaosorior at gmail.com > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From whayutin at redhat.com Wed Jul 25 17:31:01 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Wed, 25 Jul 2018 10:31:01 -0700 Subject: [openstack-dev] [tripleo] PTL non-candidacy In-Reply-To: References: Message-ID: On Wed, Jul 25, 2018 at 9:24 AM Alex Schultz wrote: > Hey folks, > > So it's been great fun and we've accomplished much over the last two > cycles but I believe it is time for me to step back and let someone > else do the PTLing. I'm not going anywhere so I'll still be around to > focus on the simplification and improvements that TripleO needs going > forward. I look forwards to continuing our efforts with everyone. > > Thanks, > -Alex > Thanks for all the hard work, long hours and leadership! You have done a great job, congrats on a great cycle. Thanks > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Wes Hayutin Associate MANAGER Red Hat w hayutin at redhat.com T: +1919 <+19197544114>4232509 IRC: weshay View my calendar and check my availability for meetings HERE -------------- next part -------------- An HTML attachment was scrubbed... URL: From corey.bryant at canonical.com Wed Jul 25 17:31:57 2018 From: corey.bryant at canonical.com (Corey Bryant) Date: Wed, 25 Jul 2018 13:31:57 -0400 Subject: [openstack-dev] [all] [designate] [heat] [python3] deadlock with eventlet and ThreadPoolExecutor in py3.7 In-Reply-To: <5B58AB28.4000507@fastmail.com> References: <5B58AB28.4000507@fastmail.com> Message-ID: Josh, Thanks for the input. GreenThreadPoolExecutor does not have the deadlock issue, so that is promising (at least with futurist 1.6.0). Does ThreadPoolExecutor have better performance than GreenThreadPoolExecutor? Curious if we could just swap out ThreadPoolExecutor for GreenThreadPoolExecutor. Thanks, Corey On Wed, Jul 25, 2018 at 12:54 PM, Joshua Harlow wrote: > Have you tried the following instead of threadpoolexecutor (which honestly > should work as well, even under eventlet + eventlet patching). > > https://docs.openstack.org/futurist/latest/reference/index. 
> html#futurist.GreenThreadPoolExecutor > > If you have the ability to specify which executor your code is using, and > you are running under eventlet I'd give preference to the green thread pool > executor under that situation (and if not running under eventlet then > prefer the threadpool executor variant). > > As for @tomoto question; honestly openstack was created before asyncio was > a thing so that was a reason and assuming eventlet patching is actually > working then all the existing stdlib stuff should keep on working under > eventlet (including concurrent.futures); otherwise eventlet.monkey_patch > isn't working and that's breaking the eventlet api. If their contract is > that only certain things work when monkey patched, that's fair, but that > needs to be documented somewhere (honestly it's time imho to get the hell > off eventlet everywhere but that likely requires rewrites of a lot of > things, oops...). > > -Josh > > Corey Bryant wrote: > >> Hi All, >> >> I'm trying to add Py3 packaging support for Ubuntu Rocky and while there >> are a lot of issues involved with supporting Py3.7, this is one of the >> big ones that I could use a hand with. >> >> With py3.7, there's a deadlock when eventlet monkeypatch of stdlib >> thread modules is combined with use of ThreadPoolExecutor. I know this >> affects at least designate. The same or similar also affects heat >> (though I've not dug into the code the traceback after canceling tests >> matches that seen with designate). And it may affect other projects that >> I haven't touched yet. >> >> How to recreate [1]: >> * designate: Add a tox.ini py37 target and run >> designate.tests.test_workers.test_processing.TestProcessingE >> xecutor.test_execute_multiple_tasks >> * heat: Add a tox.ini py37 target and run tests >> * general: Run bpo34173-recreate.py >> from issue >> 34173 (see below). >> [1] ubuntu cosmic has py3.7 >> >> In issue 508 (see below) @tomoto asks "Eventlet and asyncio solve same >> problem. Why would you want concurrent.futures and eventlet in same >> application?" >> >> I told @tomoto that I'd seek input to that question from upstream. I >> know there've been efforts to move away from eventlet but I just don't >> have the knowledge to provide a good answer to him. >> >> Here are the bugs/issues I currently have open for this: >> https://github.com/eventlet/eventlet/issues/508 >> >> https://bugs.launchpad.net/designate/+bug/1782647 >> >> https://bugs.python.org/issue34173 >> >> Any help with this would be greatly appreciated! >> >> Thanks, >> Corey >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rasca at redhat.com Wed Jul 25 17:37:56 2018 From: rasca at redhat.com (Raoul Scarazzini) Date: Wed, 25 Jul 2018 19:37:56 +0200 Subject: [openstack-dev] [tripleo] PTL non-candidacy In-Reply-To: References: Message-ID: <6106ae38-8c40-bebe-0bba-b3ddd03f03ee@redhat.com> On 25/07/2018 15:23, Alex Schultz wrote: > Hey folks, > So it's been great fun and we've accomplished much over the last two > cycles but I believe it is time for me to step back and let someone > else do the PTLing. I'm not going anywhere so I'll still be around to > focus on the simplification and improvements that TripleO needs going > forward. I look forwards to continuing our efforts with everyone. > Thanks, > -Alex To me you did really a great job. I know you'll be around and so on, but let me just say thank you. -- Raoul Scarazzini rasca at redhat.com From johfulto at redhat.com Wed Jul 25 17:38:58 2018 From: johfulto at redhat.com (John Fulton) Date: Wed, 25 Jul 2018 13:38:58 -0400 Subject: [openstack-dev] [tripleo] PTL non-candidacy In-Reply-To: <6106ae38-8c40-bebe-0bba-b3ddd03f03ee@redhat.com> References: <6106ae38-8c40-bebe-0bba-b3ddd03f03ee@redhat.com> Message-ID: On Wed, Jul 25, 2018 at 1:38 PM Raoul Scarazzini wrote: > > On 25/07/2018 15:23, Alex Schultz wrote: > > Hey folks, > > So it's been great fun and we've accomplished much over the last two > > cycles but I believe it is time for me to step back and let someone > > else do the PTLing. I'm not going anywhere so I'll still be around to > > focus on the simplification and improvements that TripleO needs going > > forward. I look forwards to continuing our efforts with everyone. > > Thanks, > > -Alex > > To me you did really a great job. I know you'll be around and so on, but > let me just say thank you. +1000! > > -- > Raoul Scarazzini > rasca at redhat.com > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From chris.friesen at windriver.com Wed Jul 25 17:43:10 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Wed, 25 Jul 2018 11:43:10 -0600 Subject: [openstack-dev] [nova] keypair quota usage info for user In-Reply-To: References: <164d0d39b12.cc6ff7f3115505.8532449585113306574@ghanshyammann.com> Message-ID: <5B58B6AE.1@windriver.com> On 07/25/2018 10:29 AM, William M Edmonds wrote: > > Ghanshyam Mann wrote on 07/25/2018 05:44:46 AM: > ... snip ... > > 1. is it ok to show the keypair used info via API ? any original > > rational not to do so or it was just like that from starting. > > keypairs aren't tied to a tenant/project, so how could nova track/report a quota > for them on a given tenant/project? Which is how the API is constructed... note > the "tenant_id" in GET /os-quota-sets/{tenant_id}/detail > > > 2. Because this change will show the keypair used quota information > > in API's existing filed 'in_use', it is API behaviour change (not > > interface signature change in backward incompatible way) which can > > cause interop issue. Should we bump microversion for this change? > > If we find a meaningful way to return in_use data for keypairs, then yes, I > would expect a microversion bump so that callers can distinguish between a) > talking to an older installation where in_use is always 0 vs. b) talking to a > newer installation where in_use is 0 because there are really none in use. 
Or if > we remove keypairs from the response, which at a glance seems to make more > sense, that should also have a microversion bump so that someone who expects the > old response format will still get it. Keypairs are weird in that they're owned by users, not projects. This is arguably wrong, since it can cause problems if a user boots an instance with their keypair and then gets removed from a project. Nova microversion 2.54 added support for modifying the keypair associated with an instance when doing a rebuild. Before that there was no clean way to do it. Chris From sdoran at redhat.com Wed Jul 25 17:46:56 2018 From: sdoran at redhat.com (Sam Doran) Date: Wed, 25 Jul 2018 13:46:56 -0400 Subject: [openstack-dev] [tripleo] [tripleo-validations] using using top-level fact vars will deprecated in future Ansible versions In-Reply-To: References: Message-ID: <59716157-D28C-4DA8-89EC-0E98E8072153@redhat.com> I spoke with other Ansible Core devs to get some clarity on this change. This is not a change that is being made quickly, lightly, or without a whole of bunch of reservation. In fact, that PR created by agaffney may not be merged any time soon. He just wanted to get something started and there is still ongoing discussion on that PR. It is definitely a WIP at this point. The main reason for this change is that pretty much all of the Ansible CVEs to date came from "fact injection", meaning a fact that contains executable Python code Jinja will merrily exec(). Vars, hostvars, and facts are different in Ansible (yes, this is confusing — sorry). All vars go through a templating step. By copying facts to vars, it means facts get templated controller side which could lead to controller compromise if malicious code exists in facts. We created an AnsibleUnsafe class to protect against this, but stopping the practice of injecting facts into vars would close the door completely. It also alleviates some name collisions if you set a hostvar that has the same name as a var. We have some methods that filter out certain variables, but keeping facts and vars in separate spaces is much cleaner. This also does not change how hostvars set via set_fact are referenced. (set_fact should really be called set_host_var). Variables set with set_fact are not facts and are therefore not inside the ansible_facts dict. They are in the hostvars dict, which you can reference as {{ my_var }} or {{ hostvars['some-host']['my_var'] }} if you need to look it up from a different host. All that being said, the setting to control this behavior as Emilien pointed out is inject_facts_as_vars, which defaults to True and will remain that way for the foreseeable future. I would not rush into changing all the fact references in playbooks. It can be a gradual process. Setting inject_facts_as_vars to True means ansible_hostname becomes ansible_facts.hostname. You do not have to use the hostvars dictionary — that is for looking up facts about hosts other than the current host. If you wanted to be proactive, you could start using the ansible_facts dictionary today since it is compatible with the default setting and will not affect others trying to use playbooks that reference ansible_facts. In other words, with the default setting of True, you can use either ansible_hostname or ansible_facts.hostname. Changing it to False means only ansible_facts.hostname is defined. > Like, really. I know we can't really have a word about that kind of decision, but... damn, WHY ?! That is most certainly not the case. 
Ansible is developed in the open and we encourage community members to attend meetings and add topics to the agenda for discussion. Ansible also goes through a proposal process for major changes, which you can view here . You can always go to #ansible-devel on Freenode or start a discussion on the mailing list to speak with the Ansible Core devs about these things as well. --- Respectfully, Sam Doran Senior Software Engineer Ansible by Red Hat sdoran at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From tpb at dyncloud.net Wed Jul 25 18:09:45 2018 From: tpb at dyncloud.net (Tom Barron) Date: Wed, 25 Jul 2018 14:09:45 -0400 Subject: [openstack-dev] [tripleo] PTL non-candidacy In-Reply-To: References: Message-ID: <20180725180945.tz2sdxpdi3qu7mia@barron.net> I don't do enough in TripleO to chime in on the list, but I can't think of a more helpful PTL! Thank you for your service. On 25/07/18 10:31 -0700, Wesley Hayutin wrote: >On Wed, Jul 25, 2018 at 9:24 AM Alex Schultz wrote: > >> Hey folks, >> >> So it's been great fun and we've accomplished much over the last two >> cycles but I believe it is time for me to step back and let someone >> else do the PTLing. I'm not going anywhere so I'll still be around to >> focus on the simplification and improvements that TripleO needs going >> forward. I look forwards to continuing our efforts with everyone. >> >> Thanks, >> -Alex >> > >Thanks for all the hard work, long hours and leadership! >You have done a great job, congrats on a great cycle. > >Thanks > >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >-- > >Wes Hayutin > >Associate MANAGER > >Red Hat > > > >w hayutin at redhat.com T: +1919 <+19197544114>4232509 > IRC: weshay > > >View my calendar and check my availability for meetings HERE > >__________________________________________________________________________ >OpenStack Development Mailing List (not for usage questions) >Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From harlowja at fastmail.com Wed Jul 25 18:15:07 2018 From: harlowja at fastmail.com (Joshua Harlow) Date: Wed, 25 Jul 2018 11:15:07 -0700 Subject: [openstack-dev] [all] [designate] [heat] [python3] deadlock with eventlet and ThreadPoolExecutor in py3.7 In-Reply-To: References: <5B58AB28.4000507@fastmail.com> Message-ID: <5B58BE2B.9070608@fastmail.com> So the only diff is that GreenThreadPoolExecutor was customized to work for eventlet (with a similar/same api as ThreadPoolExecutor); as for performance I would expect (under eventlet) that GreenThreadPoolExecutor would have better performance because it can use the native eventlet green objects (which it does try to use) instead of having to go threw the layers that ThreadPoolExecutor would have to use to achieve the same (and in this case as you found out it looks like those layers are not patched correctly in the newest ThreadPoolExecutor). Otherwise yes, under eventlet imho swap out the executor (assuming you can do this) and under threading swap in threadpool executor (ideally if done correctly the same stuff should 'just work'). Corey Bryant wrote: > Josh, > > Thanks for the input. 
GreenThreadPoolExecutor does not have the deadlock > issue, so that is promising (at least with futurist 1.6.0). > > Does ThreadPoolExecutor have better performance than > GreenThreadPoolExecutor? Curious if we could just swap out > ThreadPoolExecutor for GreenThreadPoolExecutor. > > Thanks, > Corey > > On Wed, Jul 25, 2018 at 12:54 PM, Joshua Harlow > wrote: > > Have you tried the following instead of threadpoolexecutor (which > honestly should work as well, even under eventlet + eventlet patching). > > https://docs.openstack.org/futurist/latest/reference/index.html#futurist.GreenThreadPoolExecutor > > > If you have the ability to specify which executor your code is > using, and you are running under eventlet I'd give preference to the > green thread pool executor under that situation (and if not running > under eventlet then prefer the threadpool executor variant). > > As for @tomoto question; honestly openstack was created before > asyncio was a thing so that was a reason and assuming eventlet > patching is actually working then all the existing stdlib stuff > should keep on working under eventlet (including > concurrent.futures); otherwise eventlet.monkey_patch isn't working > and that's breaking the eventlet api. If their contract is that only > certain things work when monkey patched, that's fair, but that needs > to be documented somewhere (honestly it's time imho to get the hell > off eventlet everywhere but that likely requires rewrites of a lot > of things, oops...). > > -Josh > > Corey Bryant wrote: > > Hi All, > > I'm trying to add Py3 packaging support for Ubuntu Rocky and > while there > are a lot of issues involved with supporting Py3.7, this is one > of the > big ones that I could use a hand with. > > With py3.7, there's a deadlock when eventlet monkeypatch of stdlib > thread modules is combined with use of ThreadPoolExecutor. I > know this > affects at least designate. The same or similar also affects heat > (though I've not dug into the code the traceback after canceling > tests > matches that seen with designate). And it may affect other > projects that > I haven't touched yet. > > How to recreate [1]: > * designate: Add a tox.ini py37 target and run > designate.tests.test_workers.test_processing.TestProcessingExecutor.test_execute_multiple_tasks > * heat: Add a tox.ini py37 target and run tests > * general: Run bpo34173-recreate.py > > from issue > 34173 (see below). > [1] ubuntu cosmic has py3.7 > > In issue 508 (see below) @tomoto asks "Eventlet and asyncio > solve same > problem. Why would you want concurrent.futures and eventlet in same > application?" > > I told @tomoto that I'd seek input to that question from upstream. I > know there've been efforts to move away from eventlet but I just > don't > have the knowledge to provide a good answer to him. > > Here are the bugs/issues I currently have open for this: > https://github.com/eventlet/eventlet/issues/508 > > > > https://bugs.launchpad.net/designate/+bug/1782647 > > > > https://bugs.python.org/issue34173 > > > > > Any help with this would be greatly appreciated! 
> > Thanks, > Corey > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From corey.bryant at canonical.com Wed Jul 25 19:01:00 2018 From: corey.bryant at canonical.com (Corey Bryant) Date: Wed, 25 Jul 2018 15:01:00 -0400 Subject: [openstack-dev] [all] [designate] [heat] [python3] deadlock with eventlet and ThreadPoolExecutor in py3.7 In-Reply-To: <5B58BE2B.9070608@fastmail.com> References: <5B58AB28.4000507@fastmail.com> <5B58BE2B.9070608@fastmail.com> Message-ID: Ok thanks again for the input. Corey On Wed, Jul 25, 2018 at 2:15 PM, Joshua Harlow wrote: > So the only diff is that GreenThreadPoolExecutor was customized to work > for eventlet (with a similar/same api as ThreadPoolExecutor); as for > performance I would expect (under eventlet) that GreenThreadPoolExecutor > would have better performance because it can use the native eventlet green > objects (which it does try to use) instead of having to go threw the layers > that ThreadPoolExecutor would have to use to achieve the same (and in this > case as you found out it looks like those layers are not patched correctly > in the newest ThreadPoolExecutor). > > Otherwise yes, under eventlet imho swap out the executor (assuming you can > do this) and under threading swap in threadpool executor (ideally if done > correctly the same stuff should 'just work'). > > Corey Bryant wrote: > >> Josh, >> >> Thanks for the input. GreenThreadPoolExecutor does not have the deadlock >> issue, so that is promising (at least with futurist 1.6.0). >> >> Does ThreadPoolExecutor have better performance than >> GreenThreadPoolExecutor? Curious if we could just swap out >> ThreadPoolExecutor for GreenThreadPoolExecutor. >> >> Thanks, >> Corey >> >> On Wed, Jul 25, 2018 at 12:54 PM, Joshua Harlow > > wrote: >> >> Have you tried the following instead of threadpoolexecutor (which >> honestly should work as well, even under eventlet + eventlet >> patching). >> >> https://docs.openstack.org/futurist/latest/reference/index. >> html#futurist.GreenThreadPoolExecutor >> > html#futurist.GreenThreadPoolExecutor> >> >> If you have the ability to specify which executor your code is >> using, and you are running under eventlet I'd give preference to the >> green thread pool executor under that situation (and if not running >> under eventlet then prefer the threadpool executor variant). >> >> As for @tomoto question; honestly openstack was created before >> asyncio was a thing so that was a reason and assuming eventlet >> patching is actually working then all the existing stdlib stuff >> should keep on working under eventlet (including >> concurrent.futures); otherwise eventlet.monkey_patch isn't working >> and that's breaking the eventlet api. 
If their contract is that only >> certain things work when monkey patched, that's fair, but that needs >> to be documented somewhere (honestly it's time imho to get the hell >> off eventlet everywhere but that likely requires rewrites of a lot >> of things, oops...). >> >> -Josh >> >> Corey Bryant wrote: >> >> Hi All, >> >> I'm trying to add Py3 packaging support for Ubuntu Rocky and >> while there >> are a lot of issues involved with supporting Py3.7, this is one >> of the >> big ones that I could use a hand with. >> >> With py3.7, there's a deadlock when eventlet monkeypatch of stdlib >> thread modules is combined with use of ThreadPoolExecutor. I >> know this >> affects at least designate. The same or similar also affects heat >> (though I've not dug into the code the traceback after canceling >> tests >> matches that seen with designate). And it may affect other >> projects that >> I haven't touched yet. >> >> How to recreate [1]: >> * designate: Add a tox.ini py37 target and run >> designate.tests.test_workers.test_processing.TestProcessingE >> xecutor.test_execute_multiple_tasks >> * heat: Add a tox.ini py37 target and run tests >> * general: Run bpo34173-recreate.py >> > > from >> issue >> 34173 (see below). >> [1] ubuntu cosmic has py3.7 >> >> In issue 508 (see below) @tomoto asks "Eventlet and asyncio >> solve same >> problem. Why would you want concurrent.futures and eventlet in >> same >> application?" >> >> I told @tomoto that I'd seek input to that question from >> upstream. I >> know there've been efforts to move away from eventlet but I just >> don't >> have the knowledge to provide a good answer to him. >> >> Here are the bugs/issues I currently have open for this: >> https://github.com/eventlet/eventlet/issues/508 >> >> > > >> https://bugs.launchpad.net/designate/+bug/1782647 >> >> > > >> https://bugs.python.org/issue34173 >> >> > > >> >> Any help with this would be greatly appreciated! >> >> Thanks, >> Corey >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > subscribe> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From openstack at nemebean.com Wed Jul 25 19:32:34 2018 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 25 Jul 2018 14:32:34 -0500 Subject: [openstack-dev] [tripleo] Editable environments with resource registry entries Message-ID: Hi, This came up recently on my review to add an environment to enable Designate in a TripleO deployment. It needs to set both resource registry entries and some user-configurable parameters, which means users need to make a copy of it that they can edit. However, if the file moves then the relative paths will break. The suggestion for Designate was to split the environment into one part that contains registry entries and one that contains parameters. This way the file users edit doesn't have any paths in it. So far so good. Then as I was writing docs[1] on how to use it I was reminded that we have other environments that use this pattern. In this case, specifically the ips-from-pool* (like [2]) files. I don't know if there are others. So do we need to rework all of those environments too, or is there another option? Thanks. -Ben 1: https://review.openstack.org/585833 2: https://github.com/openstack/tripleo-heat-templates/blob/master/environments/ips-from-pool.yaml From miguel at mlavalle.com Wed Jul 25 20:02:28 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Wed, 25 Jul 2018 15:02:28 -0500 Subject: [openstack-dev] [neutron] Neutron L3 sub-team meeting canceled on July 26th Message-ID: Dear Neutron Team, Tomorrow's L3 sub team meeting will be canceled. We will resume next week, on August 2nd, at 1500 UTC as normal Best regards Miguel -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Wed Jul 25 20:07:01 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Wed, 25 Jul 2018 16:07:01 -0400 Subject: [openstack-dev] [tc] Fast-track: Remove Stable branch maintenance as a project team Message-ID: Hi everyone: This email is just to notify everyone on the TC and the community that the change to remove the stable branch maintenance as a project team[1] has been fast-tracked[2]. The change should be approved on 2018-07-28 however it is beneficial to remove the stable branch team (which has been moved into a SIG) in order for `tonyb` to be able to act as an election official. There seems to be no opposing votes however a revert is always available if any members of the TC are opposed to the change[3]. Thanks to Tony for all of his help in the elections. Regards, Mohammed [1]: https://review.openstack.org/#/c/584206/ [2]: https://governance.openstack.org/tc/reference/house-rules.html#other-project-team-updates [3]: https://governance.openstack.org/tc/reference/house-rules.html#rolling-back-fast-tracked-changes From mnaser at vexxhost.com Wed Jul 25 20:40:37 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Wed, 25 Jul 2018 16:40:37 -0400 Subject: [openstack-dev] [tc] Technical Committee update for week of 23 July Message-ID: This is the weekly summary of work being done by the Technical Committee members. The full list of active items is managed in the wiki: https://wiki.openstack.org/wiki/Technical_Committee_Tracker Doug (who usually sends these out!) is out so we've come up with the idea of a vice-chair, which I'll be fulfilling. More information in the change listed below. 
We also track TC objectives for the cycle using StoryBoard at: https://storyboard.openstack.org/#!/project/923 == Recent Activity == Project updates: - Remove Stable branch maintenance as a project team https://review.openstack.org/584206 - Add ansible-role-tripleo-cookiecutter to governance https://review.openstack.org/#/c/581428/ Reference/charter changes: - Clarify new project requirements for community engagement https://review.openstack.org/#/c/567944/ - add vice chair role to the tc charter https://review.openstack.org/#/c/583947/ - designate Mohammed Naser as vice chair https://review.openstack.org/#/c/583948/ Other approved changes: - ansible-role-tripleo-zaqar had a typo which was fixed up https://review.openstack.org/#/c/583636/ - added validation for repo names (because of the above!) https://review.openstack.org/#/c/583637/ - tooling improvements in this stack: https://review.openstack.org/#/c/583953/ Office hour logs: Due to (what) seems to be a lack of consumption of the office hours logs, we're not longer logging the start and end. However, we welcome community feedback if this was something that was consumed. == Ongoing Discussions == Sean McGinnis (smcginnis) has proposed the pre-upgrade checks as the Stein goal, the document is currently being worked on with reviews already in, please chime in: - https://review.openstack.org/#/c/585491/ == TC member actions/focus/discussions for the coming week(s) == It looks like it's been a quiet past few days. However, there is a lot of discussion around how to properly decide to on-board an OpenStack project in a very specific and clear process rather than an arbitrary one at the moment. We also should continue to discuss on subjects for the upcoming PTG: - https://etherpad.openstack.org/p/tc-stein-ptg == Contacting the TC == The Technical Committee uses a series of weekly "office hour" time slots for synchronous communication. We hope that by having several such times scheduled, we will have more opportunities to engage with members of the community from different timezones. Office hour times in #openstack-tc: - 09:00 UTC on Tuesdays - 01:00 UTC on Wednesdays - 15:00 UTC on Thursdays If you have something you would like the TC to discuss, you can add it to our office hour conversation starter etherpad at: https://etherpad.openstack.org/p/tc-office-hour-conversation-starters Many of us also run IRC bouncers which stay in #openstack-tc most of the time, so please do not feel that you need to wait for an office hour time to pose a question or offer a suggestion. You can use the string "tc-members" to alert the members to your question. You will find channel logs with past conversations at http://eavesdrop.openstack.org/irclogs/%23openstack-tc/ If you expect your topic to require significant discussion or to need input from members of the community other than the TC, please start a mailing list discussion on openstack-dev at lists.openstack.org and use the subject tag "[tc]" to bring it to the attention of TC members. 
From james.slagle at gmail.com Wed Jul 25 21:01:22 2018 From: james.slagle at gmail.com (James Slagle) Date: Wed, 25 Jul 2018 17:01:22 -0400 Subject: [openstack-dev] [tripleo] network isolation can't find files referred to on director In-Reply-To: References: Message-ID: On Wed, Jul 25, 2018 at 11:56 AM, Samuel Monderer wrote: > Hi, > > I'm trying to upgrade from OSP11(Ocata) to OSP13 (Queens) > In my network-isolation I refer to files that do not exist anymore on the > director such as > > OS::TripleO::Compute::Ports::ExternalPort: > /usr/share/openstack-tripleo-heat-templates/network/ports/external.yaml > OS::TripleO::Compute::Ports::InternalApiPort: > /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api.yaml > OS::TripleO::Compute::Ports::StoragePort: > /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml > OS::TripleO::Compute::Ports::StorageMgmtPort: > /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml > OS::TripleO::Compute::Ports::TenantPort: > /usr/share/openstack-tripleo-heat-templates/network/ports/tenant.yaml > OS::TripleO::Compute::Ports::ManagementPort: > /usr/share/openstack-tripleo-heat-templates/network/ports/management_from_pool.yaml > > Where have they gone? These files are now generated from network/ports/port.network.j2.yaml during the jinja2 template rendering process. They will be created automatically during the overcloud deployment based on the enabled networks from network_data.yaml. You still need to refer to the rendered path (as shown in your example) in the various resource_registry entries. This work was done to enable full customization of the created networks used for the deployment. See: https://docs.openstack.org/tripleo-docs/latest/install/advanced_deployment/custom_networks.html -- -- James Slagle -- From remo at rm.ht Wed Jul 25 21:09:04 2018 From: remo at rm.ht (Remo Mattei) Date: Wed, 25 Jul 2018 14:09:04 -0700 Subject: [openstack-dev] [tripleo] PTL non-candidacy In-Reply-To: References: Message-ID: I want publically want to say THANK YOU Alex. You ROCK. Hopefully one of those summit, I will meet. Ciao, Remo > On Jul 25, 2018, at 6:23 AM, Alex Schultz wrote: > > Hey folks, > > So it's been great fun and we've accomplished much over the last two > cycles but I believe it is time for me to step back and let someone > else do the PTLing. I'm not going anywhere so I'll still be around to > focus on the simplification and improvements that TripleO needs going > forward. I look forwards to continuing our efforts with everyone. > > Thanks, > -Alex > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org ?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Wed Jul 25 21:13:03 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 25 Jul 2018 16:13:03 -0500 Subject: [openstack-dev] [RelMgmt][PTL][Election] Candidacy for Release Management PTL for Stein Message-ID: <20180725211302.GA25739@sm-workstation> Hello everyone! I would like to submit my name to continue as the release management PTL for the Stein release. Since I failed to recruit someone new to take over for me, I guess I'm still it. But being serious, I think I've now gotten a much deeper understanding of our release tools and process. 
Things with CI jobs have stabilized and we have a lot of good checks in place that help identify issues before they become problems. While Doug and Thierry are now very busy with other things that prevent them from running again, they are still around and available with a lot of great historical information and are able to help immensely with reviews, fixes, and keeping code rot at bay. I'm not saying this as a reason to have enough confidence in me to continue to run things, but for anyone that might be interested in getting involved in the Release Management team - know you would have plenty of help getting involved. I'm looking forward to helping out in whatever ways I can in Stein, and I appreciate your consideration for me to continue as PTL for the Release Management team. Sean McGinnis (smcginnis) From prometheanfire at gentoo.org Wed Jul 25 21:17:43 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Wed, 25 Jul 2018 16:17:43 -0500 Subject: [openstack-dev] [Requirements][PTL][Election] Nomination of Matthew Thode (prometheanfire) for PTL of the Requirements project Message-ID: <20180725211743.blxtsmfqnkhmjvnd@gentoo.org> I would like to announce my candidacy for PTL of the Requirements project for the Stein cycle. The following will be my goals for the cycle, in order of importance: 1. The primary goal is to keep a tight rein on global-requirements and upper-constraints updates. (Keep things working well) 2. Un-cap requirements where possible (stuff like eventlet). 3. Publish constraints and requirements to streamline the freeze process. https://bugs.launchpad.net/openstack-requirements/+bug/1719006 is the bug tracking the publish job. 4. Audit global-requirements and upper-constraints for redundancies. One of the rules we have for new entrants to global-requirements and/or upper-constraints is that they be non-redundant. Keeping that rule in mind, audit the list of requirements for possible redundancies and if possible, reduce the number of requirements we manage. 5. Find more cores to smooth out the review process. I look forward to continue working with you in this cycle, as your PTL or not. -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From sbaker at redhat.com Wed Jul 25 21:50:44 2018 From: sbaker at redhat.com (Steve Baker) Date: Thu, 26 Jul 2018 09:50:44 +1200 Subject: [openstack-dev] [tripleo] FFE request for container-prepare-workflow Message-ID: <3cedf461-3883-1150-6961-d4129ca30760@redhat.com> I'd like to request a FFE for this blueprint[1]. Theremaining changes will be tracked as Depends-On on this oooq change[2]. Initially the aim of this blueprint was to do all container prepare operations in a mistral action before the overcloud deploy. However the priority for delivery switched to helping blueprint containerized-undercloud with its container prepare. Once this was complete it was apparent that the overcloud prepare could share the undercloud prepare approach. The undercloud prepare does the following: 1) During undercloud_config, do a try-run prepare to populate the image parameters (but don't do any image transfers) 2) During tripleo-deploy, driven by tripleo-heat-templates, do the actual prepare after the undercloud registry is installed but before and containers are required For the overcloud, 1) will be done by a mistral action[3] and 2) will be done during overcloud deploy[4]. 
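For reference, the prepare input in both cases is an ordinary environment-file
parameter. A minimal sketch, assuming the ContainerImagePrepare format this
blueprint introduces (the namespace, name_prefix and tag values below are
placeholders, not the shipped defaults):

  parameter_defaults:
    ContainerImagePrepare:
    - push_destination: true
      set:
        namespace: docker.io/tripleorocky
        name_prefix: centos-binary-
        tag: current-tripleo

The same style of entry is intended to feed both the try-run in 1) that
populates the image parameters and the actual prepare in 2).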
The vast majority of code for this blueprint has landed and is exercised by containerized-undercloud. I don't expect issues with the overcloud changes landing, but in the worst case scenario the overcloud prepare can be done manually by running the new command "openstack tripleo container image prepare" as documented in this change [5]. [1] https://blueprints.launchpad.net/tripleo/+spec/container-prepare-workflow [2] https://review.openstack.org/#/c/573476/ [3] https://review.openstack.org/#/c/558972/ (landed but currently being reverted) [4] https://review.openstack.org/#/c/581919/ (plus the series before it) [5] https://review.openstack.org/#/c/553104/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aschultz at redhat.com Wed Jul 25 21:55:19 2018 From: aschultz at redhat.com (Alex Schultz) Date: Wed, 25 Jul 2018 15:55:19 -0600 Subject: [openstack-dev] [tripleo] FFE request for container-prepare-workflow In-Reply-To: <3cedf461-3883-1150-6961-d4129ca30760@redhat.com> References: <3cedf461-3883-1150-6961-d4129ca30760@redhat.com> Message-ID: On Wed, Jul 25, 2018 at 3:50 PM, Steve Baker wrote: > I'd like to request a FFE for this blueprint[1]. > > The remaining changes will be tracked as Depends-On on this oooq change[2]. > > Initially the aim of this blueprint was to do all container prepare > operations in a mistral action before the overcloud deploy. However the > priority for delivery switched to helping blueprint containerized-undercloud > with its container prepare. Once this was complete it was apparent that the > overcloud prepare could share the undercloud prepare approach. > > The undercloud prepare does the following: > > 1) During undercloud_config, do a try-run prepare to populate the image > parameters (but don't do any image transfers) > > 2) During tripleo-deploy, driven by tripleo-heat-templates, do the actual > prepare after the undercloud registry is installed but before and containers > are required > > For the overcloud, 1) will be done by a mistral action[3] and 2) will be > done during overcloud deploy[4]. > > The vast majority of code for this blueprint has landed and is exercised by > containerized-undercloud. I don't expect issues with the overcloud changes > landing, but in the worst case scenario the overcloud prepare can be done > manually by running the new command "openstack tripleo container image > prepare" as documented in this change [5]. > Sounds good, hopefully we can figure out the issue with the reverted patch and get it landed. 
Thanks, -Alex > [1] > https://blueprints.launchpad.net/tripleo/+spec/container-prepare-workflow > > [2] https://review.openstack.org/#/c/573476/ > > [3] https://review.openstack.org/#/c/558972/ (landed but currently being > reverted) > > [4] https://review.openstack.org/#/c/581919/ (plus the series before it) > > [5] https://review.openstack.org/#/c/553104/ > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From johnsomor at gmail.com Wed Jul 25 23:11:14 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Wed, 25 Jul 2018 16:11:14 -0700 Subject: [openstack-dev] [Ironic][Octavia][Congress] The usage of Neutron API In-Reply-To: <0957CD8F4B55C0418161614FEC580D6B2F9B9AC6@YYZEML701-CHM.china.huawei.com> References: <0957CD8F4B55C0418161614FEC580D6B2F9B9AC6@YYZEML701-CHM.china.huawei.com> Message-ID: Octavia is done. Thank you for the patch! Michael On Tue, Jul 24, 2018 at 8:35 AM Hongbin Lu wrote: > > Hi folks, > > > > Neutron has landed a patch to enable strict validation on query parameters when listing resources [1]. I tested the Neutorn’s change in your project’s gate and the result suggested that your projects would need the fixes [2][3][4] to keep the gate functioning. > > > > Please feel free to reach out if there is any question or concern. > > > > [1] https://review.openstack.org/#/c/574907/ > > [2] https://review.openstack.org/#/c/583990/ > > [3] https://review.openstack.org/#/c/584000/ > > [4] https://review.openstack.org/#/c/584112/ > > > > Best regards, > > Hongbin > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From soulxu at gmail.com Thu Jul 26 00:19:03 2018 From: soulxu at gmail.com (Alex Xu) Date: Thu, 26 Jul 2018 08:19:03 +0800 Subject: [openstack-dev] [nova] keypair quota usage info for user In-Reply-To: <164d0d39b12.cc6ff7f3115505.8532449585113306574@ghanshyammann.com> References: <164d0d39b12.cc6ff7f3115505.8532449585113306574@ghanshyammann.com> Message-ID: 2018-07-25 17:44 GMT+08:00 Ghanshyam Mann : > Hi All, > > During today API office hour, we were discussing about keypair quota usage > bug (newton) [1]. key_pair 'in_use' quota is always 0 even when request per > user which is because it is being set as 0 always [2]. > > From checking the history and review discussion on [3], it seems that it > was like that from staring. key_pair quota is being counted when actually > creating the keypair but it is not shown in API 'in_use' field. Vishakha > (assignee of this bug) is currently planing to work on this bug and before > that we have few queries: > > 1. is it ok to show the keypair used info via API ? any original rational > not to do so or it was just like that from starting. > It doesn't make sense to show the usage when the user queries project quota, but it makes sense to show the usage when the user queries specific user quota. And we have no way to show usage for the server_group_memebers/security_group_rules, since they are the limit for a specific server group and security group, we have no way to express that in our quota API. > > 2. 
Because this change will show the keypair used quota information in > API's existing filed 'in_use', it is API behaviour change (not interface > signature change in backward incompatible way) which can cause interop > issue. Should we bump microversion for this change? > If we are going to bump microversion, I prefer to set the usage to -1 for server_group_member/security_group_rules usage, since 0 is really confuse for the end user. > > [1] https://bugs.launchpad.net/nova/+bug/1644457 > [2] https://github.com/openstack/nova/blob/bf497cc47497d3a5603bf60de65205 > 4ac5ae1993/nova/quota.py#L189 > [3] https://review.openstack.org/#/c/446239/ > > -gmann > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From soulxu at gmail.com Thu Jul 26 00:21:07 2018 From: soulxu at gmail.com (Alex Xu) Date: Thu, 26 Jul 2018 08:21:07 +0800 Subject: [openstack-dev] [nova] keypair quota usage info for user In-Reply-To: References: <164d0d39b12.cc6ff7f3115505.8532449585113306574@ghanshyammann.com> Message-ID: 2018-07-26 0:29 GMT+08:00 William M Edmonds : > > Ghanshyam Mann wrote on 07/25/2018 05:44:46 AM: > ... snip ... > > 1. is it ok to show the keypair used info via API ? any original > > rational not to do so or it was just like that from starting. > > keypairs aren't tied to a tenant/project, so how could nova track/report a > quota for them on a given tenant/project? Which is how the API is > constructed... note the "tenant_id" in GET /os-quota-sets/{tenant_id}/ > detail > Keypairs usage is only value for the API 'GET /os-quota-sets/{tenant_id}/detail?user_id={user_id}' > > > > 2. Because this change will show the keypair used quota information > > in API's existing filed 'in_use', it is API behaviour change (not > > interface signature change in backward incompatible way) which can > > cause interop issue. Should we bump microversion for this change? > > If we find a meaningful way to return in_use data for keypairs, then yes, > I would expect a microversion bump so that callers can distinguish between > a) talking to an older installation where in_use is always 0 vs. b) talking > to a newer installation where in_use is 0 because there are really none in > use. Or if we remove keypairs from the response, which at a glance seems to > make more sense, that should also have a microversion bump so that someone > who expects the old response format will still get it. > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From soulxu at gmail.com Thu Jul 26 00:22:22 2018 From: soulxu at gmail.com (Alex Xu) Date: Thu, 26 Jul 2018 08:22:22 +0800 Subject: [openstack-dev] [nova] keypair quota usage info for user In-Reply-To: <5B58B6AE.1@windriver.com> References: <164d0d39b12.cc6ff7f3115505.8532449585113306574@ghanshyammann.com> <5B58B6AE.1@windriver.com> Message-ID: 2018-07-26 1:43 GMT+08:00 Chris Friesen : > On 07/25/2018 10:29 AM, William M Edmonds wrote: > >> >> Ghanshyam Mann wrote on 07/25/2018 05:44:46 AM: >> ... snip ... >> > 1. is it ok to show the keypair used info via API ? any original >> > rational not to do so or it was just like that from starting. >> >> keypairs aren't tied to a tenant/project, so how could nova track/report >> a quota >> for them on a given tenant/project? Which is how the API is >> constructed... note >> the "tenant_id" in GET /os-quota-sets/{tenant_id}/detail >> >> > 2. Because this change will show the keypair used quota information >> > in API's existing filed 'in_use', it is API behaviour change (not >> > interface signature change in backward incompatible way) which can >> > cause interop issue. Should we bump microversion for this change? >> >> If we find a meaningful way to return in_use data for keypairs, then yes, >> I >> would expect a microversion bump so that callers can distinguish between >> a) >> talking to an older installation where in_use is always 0 vs. b) talking >> to a >> newer installation where in_use is 0 because there are really none in >> use. Or if >> we remove keypairs from the response, which at a glance seems to make more >> sense, that should also have a microversion bump so that someone who >> expects the >> old response format will still get it. >> > > Keypairs are weird in that they're owned by users, not projects. This is > arguably wrong, since it can cause problems if a user boots an instance > with their keypair and then gets removed from a project. > > Nova microversion 2.54 added support for modifying the keypair associated > with an instance when doing a rebuild. Before that there was no clean way > to do it. I don't understand this, we didn't count the keypair usage with the instance together, we just count the keypair usage for specific user. > > > Chris > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Thu Jul 26 00:36:27 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 26 Jul 2018 09:36:27 +0900 Subject: [openstack-dev] [nova] API updates week 19-25 In-Reply-To: References: <164d0dbe0e0.fae56c84115824.4467269741317090143@ghanshyammann.com> Message-ID: <164d403fa0b.10f7f2f75135922.5537280384499407145@ghanshyammann.com> ---- On Wed, 25 Jul 2018 23:53:18 +0900 Surya Seetharaman wrote ---- > Hi! > On Wed, Jul 25, 2018 at 11:53 AM, Ghanshyam Mann wrote: > > 5. API Extensions merge work > - https://blueprints.launchpad.net/nova/+spec/api-extensions-merge-rocky > - https://review.openstack.org/#/q/project:openstack/nova+branch:master+topic:bp/api-extensions-merge-rocky > - Weekly Progress: part-1 of schema merge and part-2 of server_create merge has been merged for Rocky. 1 last patch of removing the placeholder method are on gate. 
> part-3 of view builder merge cannot make it to Rocky (7 patch up for review + 5 more to push)< Postponed this work to Stein. > > 6. Handling a down cell > - https://blueprints.launchpad.net/nova/+spec/handling-down-cell > - https://review.openstack.org/#/q/topic:bp/handling-down-cell+(status:open+OR+status:merged) > - Weekly Progress: It is difficult to make it in Rocky? matt has open comment on patch about changing the service list along with server list in single microversion which make > sense. > > > ​The handling down cell spec related API changes will also be postponed to Stein since the view builder merge (part-3 of API Extensions merge work)​ is postponed to Stein. It would be more cleaner. Yeah, I will make sure view builder things gets in early in stein. I am going to push all remaining patches and make them ready for review once we have stein branch. -gmann > -- > > Regards, > Surya. > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From gmann at ghanshyammann.com Thu Jul 26 00:47:23 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 26 Jul 2018 09:47:23 +0900 Subject: [openstack-dev] Lots of slow tests timing out jobs In-Reply-To: <5a2492fe-d3b5-f0e9-d2c6-8275c7566836@gmail.com> References: <164d030c39e.101330779109144.5025774575873163310@ghanshyammann.com> <5a2492fe-d3b5-f0e9-d2c6-8275c7566836@gmail.com> Message-ID: <164d40dfcc7.109bc49b9135939.1526212168494166251@ghanshyammann.com> ---- On Wed, 25 Jul 2018 22:22:24 +0900 Matt Riedemann wrote ---- > On 7/25/2018 1:46 AM, Ghanshyam Mann wrote: > > yeah, there are many tests taking too long time. I do not know the reason this time but last time we did audit for slow tests was mainly due to ssh failure. > > I have created the similar ethercalc [3] to collect time taking tests and then round figure of their avg time taken since last 14 days from health dashboard. Yes, there is no calculated avg time on o-h so I did not take exact avg time its round figure. > > > > May be 14 days is too less to take decision to mark them slow but i think their avg time since 3 months will be same. should we consider 3 month time period for those ? > > > > As per avg time, I have voted (currently based on 14 days avg) on ethercalc which all test to mark as slow. I taken the criteria of >120 sec avg time. Once we have more and more people votes there we can mark them slow. > > > > [3]https://ethercalc.openstack.org/dorupfz6s9qt > > Thanks for this. I haven't gone through all of the tests in there yet, > but noticed (yesterday) a couple of them were personality file compute > API tests, which I thought was strange. Do we have any idea where the > time is being spent there? I assume it must be something with ssh > validation to try and read injected files off the guest. I need to dig > into this one a bit more because by default, file injection is disabled > in the libvirt driver so I'm not even sure how these are running (or > really doing anything useful). That is set to True explicitly in tempest-full job [1] and then devstack set it True on nova. >Given we have deprecated personality > files in the compute API [1] I would definitely mark those as slow tests > so we can still run them but don't care about them as much. Make sense, +1. 
[1] http://git.openstack.org/cgit/openstack/tempest/tree/.zuul.yaml#n56 -gmann > > [1] > https://docs.openstack.org/nova/latest/reference/api-microversion-history.html#id52 > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From Arkady.Kanevsky at dell.com Thu Jul 26 01:55:13 2018 From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com) Date: Thu, 26 Jul 2018 01:55:13 +0000 Subject: [openstack-dev] [tripleo] PTL non-candidacy In-Reply-To: References: Message-ID: <6d1b1cd1b1944ce98947bdfadd7b6f3f@AUSX13MPS308.AMER.DELL.COM> Indeed. Thanks Alex for your great leadership of TripleO. From: Remo Mattei [mailto:remo at rm.ht] Sent: Wednesday, July 25, 2018 4:09 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [tripleo] PTL non-candidacy I want publically want to say THANK YOU Alex. You ROCK. Hopefully one of those summit, I will meet. Ciao, Remo On Jul 25, 2018, at 6:23 AM, Alex Schultz > wrote: Hey folks, So it's been great fun and we've accomplished much over the last two cycles but I believe it is time for me to step back and let someone else do the PTLing. I'm not going anywhere so I'll still be around to focus on the simplification and improvements that TripleO needs going forward. I look forwards to continuing our efforts with everyone. Thanks, -Alex __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [Image removed by sender.] -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ~WRD101.jpg Type: image/jpeg Size: 823 bytes Desc: ~WRD101.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 3164 bytes Desc: image001.jpg URL: From dtruong at blizzard.com Thu Jul 26 04:09:30 2018 From: dtruong at blizzard.com (Duc Truong) Date: Thu, 26 Jul 2018 04:09:30 +0000 Subject: [openstack-dev] [Senlin][PTL][Election] Candidacy for Senlin PTL for Stein Message-ID: Hello everyone, I'd like to announce my candidacy for the Senlin PTL position during the Stein cycle. I've been contributing to Senlin since the Queens cycle and became a core reviewer during the Rocky cycle. I work for Blizzard Entertainment where I'm an active operator and upstream developer for Senlin. I believe this dual role gives me a unique perspective on the use cases for Senlin. If elected as PTL, I will focus on the following priorities: * Testing: More integration tests are needed to avoid any regression due to new feature implementations. More rally tests are needed to cover stress testing scenarios in HA deployments of Senlin. * Bug fixes: Actively monitor incoming bug reports and triage them. Clean out old bugs that can no longer be reproduced. * Technical debt: Identify areas of code that can be reimplemented more efficiently and/or simplified. * User documentation: Restructure the Senlin documentation to make it easier for the users to find the relevant information. 
* Grow the Senlin community: My goal is to grow the Senlin user base and encourage more developers to contribute. To do so, I propose changing the weekly meetings to office hours and hold those office hours consistently so that new users and/or developers can ask questions and receive feedback. Moreover, I want to increase Senlin's visibility in the developer community by more actively using the mailing list. One idea would be to send out Senlin project updates to the mailing list throughout the cycle like many other projects are doing now. Thanks for your consideration. Duc Truong (dtruong) From cjeanner at redhat.com Thu Jul 26 05:37:20 2018 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Thu, 26 Jul 2018 07:37:20 +0200 Subject: [openstack-dev] [tripleo] PTL candidacy for the Stein Cycle In-Reply-To: References: Message-ID: <0505b767-5eaf-6877-ff23-3e4cf2c489ab@redhat.com> +1 :). On 07/25/2018 02:03 PM, Juan Antonio Osorio wrote: > Hello folks! > > I'd like to nominate myself for the TripleO PTL role for the Stein cycle. > > Alex has done a great job as a PTL: The project is progressing nicely > with many > new, exciting features and uses for TripleO coming to fruition recently. > It's a > great time for the project. But, there's more work to be done. > > I have served the TripleO community as a core-reviewer for some years > now and, > more recently, by driving the Security Squad. This project has been a > great learning experience for me, both technically (I got to learn even > more of > OpenStack) and community-wise. Now I wish to better serve the community > further > by bringing my experiences into PTL role. While I have not yet served as PTL > for a project before,I'm eager to learn the ropes and help improve the > community that has been so influential on me. > > For Stein, I would like to focus on: > > * Increasing TripleO's usage in the testing of other projects >   Now that TripleO can deploy a standalone OpenStack installation, I hope it >   can be leveraged to add value to other projects' testing efforts. I > hope this >   would subsequentially help increase TripleO's testing coverage, and reduce >   the footprint required for full-deployment testing. > > * Technical Debt & simplification >   We've been working on simplifying the deployment story and battle > technical >   depth -- let’s keep  this momentum going.  We've been running (mostly) > fully >   containerized environments for a couple of releases now; I hope we can > reduce >   the number of stacks we create, which would in turn simplify the project >   structure (at least on the t-h-t side). We should also aim for the most >   convergence we can achieve (e.g. CLI and UI workflows). > > * CI and testing >   The project has made great progress regarding CI and testing; lets > keep this >   moving forward and get developers easier ways to bring up testing >   environments for them to work on and to be able to reproduce CI jobs. > > Thanks! > > Juan Antonio Osorio Robles > IRC: jaosorior > > > -- > Juan Antonio Osorio R. > e-mail: jaosorior at gmail.com > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Cédric Jeanneret Software Engineer DFG:DF -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From stdake at cisco.com Thu Jul 26 05:40:02 2018 From: stdake at cisco.com (Steven Dake (stdake)) Date: Thu, 26 Jul 2018 05:40:02 +0000 Subject: [openstack-dev] [kolla] ptl non candidacy In-Reply-To: References: Message-ID: <1532583603875.79091@cisco.com> ?Jeffrey, Thanks for your excellent service as Kolla PTL. You have served the Kolla community well. Regards, -steve ________________________________ From: Jeffrey Zhang Sent: Tuesday, July 24, 2018 8:48 PM To: OpenStack Development Mailing List Subject: [openstack-dev] [kolla] ptl non candidacy Hi all, I just wanna to say I am not running PTL for Stein cycle. I have been involved in Kolla project for almost 3 years. And recently my work changes a little, too. So I may not have much time in the community in the future. Kolla is a great project and the community is also awesome. I would encourage everyone in the community to consider for running. Thanks for your support :D. -- Regards, Jeffrey Zhang Blog: http://xcodest.me -------------- next part -------------- An HTML attachment was scrubbed... URL: From smonderer at vasonanetworks.com Thu Jul 26 05:41:12 2018 From: smonderer at vasonanetworks.com (Samuel Monderer) Date: Thu, 26 Jul 2018 08:41:12 +0300 Subject: [openstack-dev] [tripleo] Setting swift as glance backend Message-ID: Hi, I would like to deploy a small overcloud with just one controller and one compute for testing. I want to use swift as the glance backend. How do I configure the overcloud templates? Samuel -------------- next part -------------- An HTML attachment was scrubbed... URL: From cjeanner at redhat.com Thu Jul 26 05:47:03 2018 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Thu, 26 Jul 2018 07:47:03 +0200 Subject: [openstack-dev] [tripleo] [tripleo-validations] using using top-level fact vars will deprecated in future Ansible versions In-Reply-To: <59716157-D28C-4DA8-89EC-0E98E8072153@redhat.com> References: <59716157-D28C-4DA8-89EC-0E98E8072153@redhat.com> Message-ID: <159c9b6c-077a-6328-d4f7-fde9664a3571@redhat.com> Hello Sam, Thanks for the clarifications. On 07/25/2018 07:46 PM, Sam Doran wrote: > I spoke with other Ansible Core devs to get some clarity on this change. > > This is not a change that is being made quickly, lightly, or without a > whole of bunch of reservation. In fact, that PR created by agaffney may > not be merged any time soon. He just wanted to get something started and > there is still ongoing discussion on that PR. It is definitely a WIP at > this point. > > The main reason for this change is that pretty much all of the Ansible > CVEs to date came from "fact injection", meaning a fact that contains > executable Python code Jinja will merrily exec(). Vars, hostvars, and > facts are different in Ansible (yes, this is confusing — sorry). All > vars go through a templating step. By copying facts to vars, it means > facts get templated controller side which could lead to controller > compromise if malicious code exists in facts. > > We created an AnsibleUnsafe class to protect against this, but stopping > the practice of injecting facts into vars would close the door > completely. It also alleviates some name collisions if you set a hostvar > that has the same name as a var. We have some methods that filter out > certain variables, but keeping facts and vars in separate spaces is much > cleaner. > > This also does not change how hostvars set via set_fact are referenced. 
> (set_fact should really be called set_host_var). Variables set with > set_fact are not facts and are therefore not inside the ansible_facts > dict. They are in the hostvars dict, which you can reference as {{ > my_var }} or {{ hostvars['some-host']['my_var'] }} if you need to look > it up from a different host. so if, for convenience, we do this: vars: a_mounts: "{{ hostvars[inventory_hostname].ansible_facts.mounts }}" That's completely acceptable and correct, and won't create any security issue, right? > > All that being said, the setting to control this behavior as Emilien > pointed out is inject_facts_as_vars, which defaults to True and will > remain that way for the foreseeable future. I would not rush into > changing all the fact references in playbooks. It can be a gradual process. > > Setting inject_facts_as_vars toTrue means ansible_hostname becomes > ansible_facts.hostname. You do not have to use the hostvars dictionary — > that is for looking up facts about hosts other than the current host. > > If you wanted to be proactive, you could start using the ansible_facts > dictionary today since it is compatible with the default setting and > will not affect others trying to use playbooks that reference ansible_facts. > > In other words, with the default setting of True, you can use either > ansible_hostname or ansible_facts.hostname. Changing it to False means > only ansible_facts.hostname is defined. > >> Like, really. I know we can't really have a word about that kind of >> decision, but... damn, WHY ?! > > That is most certainly not the case. Ansible is developed in the open > and we encourage community members to attend meetings >  and add > topics to the agenda >  for discussion. > Ansible also goes through a proposal process for major changes, which > you can view here > . > > You can always go to #ansible-devel on Freenode or start a discussion on > the mailing list >  to speak with > the Ansible Core devs about these things as well. And I also have the "Because" linked to my "why" :). big thanks! Bests, C. > > --- > > Respectfully, > > Sam Doran > Senior Software Engineer > Ansible by Red Hat > sdoran at redhat.com > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Cédric Jeanneret Software Engineer DFG:DF -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From jtomasek at redhat.com Thu Jul 26 08:31:56 2018 From: jtomasek at redhat.com (Jiri Tomasek) Date: Thu, 26 Jul 2018 10:31:56 +0200 Subject: [openstack-dev] [tripleo] FFE request for config-download-ui Message-ID: Hello, I would like to request a FFE for [1]. Current status of TripleO UI patches is here [2] there are last 2 patches pending review which currently depend on [3] which is close to land. [1] https://blueprints.launchpad.net/tripleo/+spec/config-download-ui/ [2] https://review.openstack.org/#/q/project:openstack/tripleo-ui+branch:master+topic:bp/config-download-ui [3] https://review.openstack.org/#/c/583293/ Thanks -- Jiri -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From smonderer at vasonanetworks.com Thu Jul 26 08:58:22 2018 From: smonderer at vasonanetworks.com (Samuel Monderer) Date: Thu, 26 Jul 2018 11:58:22 +0300 Subject: [openstack-dev] [tripleo] network isolation can't find files referred to on director In-Reply-To: References: Message-ID: Hi James, I understand the network-environment.yaml will also be generated. What do you mean by rendered path? Will it be "usr/share/openstack-tripleo-heat-templates/network/ports/"? By the way I didn't find any other place in my templates where I refer to these files? What about custom nic configs is there also a jinja2 process to create them? Samuel On Thu, Jul 26, 2018 at 12:02 AM James Slagle wrote: > On Wed, Jul 25, 2018 at 11:56 AM, Samuel Monderer > wrote: > > Hi, > > > > I'm trying to upgrade from OSP11(Ocata) to OSP13 (Queens) > > In my network-isolation I refer to files that do not exist anymore on the > > director such as > > > > OS::TripleO::Compute::Ports::ExternalPort: > > /usr/share/openstack-tripleo-heat-templates/network/ports/external.yaml > > OS::TripleO::Compute::Ports::InternalApiPort: > > > /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api.yaml > > OS::TripleO::Compute::Ports::StoragePort: > > /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml > > OS::TripleO::Compute::Ports::StorageMgmtPort: > > /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml > > OS::TripleO::Compute::Ports::TenantPort: > > /usr/share/openstack-tripleo-heat-templates/network/ports/tenant.yaml > > OS::TripleO::Compute::Ports::ManagementPort: > > > /usr/share/openstack-tripleo-heat-templates/network/ports/management_from_pool.yaml > > > > Where have they gone? > > These files are now generated from network/ports/port.network.j2.yaml > during the jinja2 template rendering process. They will be created > automatically during the overcloud deployment based on the enabled > networks from network_data.yaml. > > You still need to refer to the rendered path (as shown in your > example) in the various resource_registry entries. > > This work was done to enable full customization of the created > networks used for the deployment. See: > > https://docs.openstack.org/tripleo-docs/latest/install/advanced_deployment/custom_networks.html > > > -- > -- James Slagle > -- > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhu.bingbing at 99cloud.net Thu Jul 26 09:03:21 2018 From: zhu.bingbing at 99cloud.net (zhubingbing) Date: Thu, 26 Jul 2018 17:03:21 +0800 (CST) Subject: [openstack-dev] [kolla] ptl non candidacy In-Reply-To: References: Message-ID: <3dd74f5.5b00.164d5d40c29.Coremail.zhu.bingbing@99cloud.net> Thanks for your work as PTL during the Rocky cycle Jeffrey Cheers, zhubingbing 在 2018-07-25 11:48:24,"Jeffrey Zhang" 写道: Hi all, I just wanna to say I am not running PTL for Stein cycle. I have been involved in Kolla project for almost 3 years. And recently my work changes a little, too. So I may not have much time in the community in the future. Kolla is a great project and the community is also awesome. I would encourage everyone in the community to consider for running. Thanks for your support :D. 
-- Regards, Jeffrey Zhang Blog: http://xcodest.me -------------- next part -------------- An HTML attachment was scrubbed... URL: From work at seanmooney.info Thu Jul 26 11:22:45 2018 From: work at seanmooney.info (Sean Mooney) Date: Thu, 26 Jul 2018 12:22:45 +0100 Subject: [openstack-dev] [infra][nova] Running NFV tests in CI In-Reply-To: <1532458038.3552690.1451538240.310285D1@webmail.messagingengine.com> References: <1532449800.2752809.1451389112.2DD9BA8A@webmail.messagingengine.com> <1532458038.3552690.1451538240.310285D1@webmail.messagingengine.com> Message-ID: On 24 July 2018 at 19:47, Clark Boylan wrote: > > On Tue, Jul 24, 2018, at 10:21 AM, Artom Lifshitz wrote: > > On Tue, Jul 24, 2018 at 12:30 PM, Clark Boylan wrote: > > > On Tue, Jul 24, 2018, at 9:23 AM, Artom Lifshitz wrote: > > >> Hey all, > > >> > > >> tl;dr Humbly requesting a handful of nodes to run NFV tests in CI > > >> > > >> Intel has their NFV tests tempest plugin [1] and manages a third party > > >> CI for Nova. Two of the cores on that project (Stephen Finucane and > > >> Sean Mooney) have now moved to Red Hat, but the point still stands > > >> that there's a need and a use case for testing things like NUMA > > >> topologies, CPU pinning and hugepages. > > >> > > >> At Red Hat, we also have a similar tempest plugin project [2] that we > > >> use for downstream whitebox testing. The scope is a bit bigger than > > >> just NFV, but the main use case is still testing NFV code in an > > >> automated way. > > >> > > >> Given that there's a clear need for this sort of whitebox testing, I > > >> would like to humbly request a handful of nodes (in the 3 to 5 range) > > >> from infra to run an "official" Nova NFV CI. The code doing the > > >> testing would initially be the current Intel plugin, bug we could have > > >> a separate discussion about keeping "Intel" in the name or forking > > >> and/or renaming it to something more vendor-neutral. > > > > > > The way you request nodes from Infra is through your Zuul configuration. Add jobs to a project to run tests on the node labels that you want. > > > > Aha, thanks, I'll look into that. I was coming from a place of > > complete ignorance about infra. > > > > > > I'm guessing this process doesn't work for NFV tests because you have specific hardware requirements that are not met by our current VM resources? > > > If that is the case it would probably be best to start by documenting what is required and where the existing VM resources fall > > > short. > > > > Well, it should be possible to do most of what we'd like with nested > > virt and virtual NUMA topologies, though things like hugepages will > > need host configuration, specifically the kernel boot command [1]. Is > > that possible with the nodes we have? > > https://docs.openstack.org/infra/manual/testing.html attempts to give you an idea for what is currently available via the test environments. > > > Nested virt has historically been painful because not all clouds support it and those that do did not do so in a reliable way (VMs and possibly hypervisors would crash). This has gotten better recently as nested virt is something more people have an interest in >getting working but it is still hit and miss particularly as you use newer kernels in guests. I think if we can continue to work together with our clouds (thank you limestone, OVH, and vexxhost!) we may be able to work out nested virt that is redundant across multiple >clouds. 
We will likely need individuals willing to keep caring for that though and debug problems when the next release of your favorite distro shows up. Can you get by with qemu or is nested virt required? for what its worth the intel nfv ci has alway ran with nested virt since we first set it up on ubuntu 12.04 all the way through the time we ran it fedora 20- fedora 21 and it continue to use nested virt on ubuntu 16.04 we have never had any issue with nested virt but the key to using it correctly is you should always set the nova cpu mode to host-passthrough if you use nested virt. because of how we currently do cpu pinning/ hugepanges and numa affinity in nova today to do this testign we have a hard requiremetn on running kvm in devstack which mean we have a hard requirement for nested virt. there ware ways around that but the nova core team has previously express there view that adding the code changes reqiured to allow the use of qemu is not warrented for ci since we would also not be testing the normal config e.g. these feature are normaly only used when performance matters whcih means you will be useing kvm not qemu. i have tried to test ovs-dpdk in the upstream ci on 3 ocation in the past (this being the most recent https://review.openstack.org/#/c/433491/) but without nested virt that didnt get very far. > > As for hugepages, I've done a quick survey of cpuinfo across our clouds and all seem to have pse available but not all have pdpe1gb available. Are you using 1GB hugepages? Keep in mind that the test VMs only have 8GB of memory total. As for booting with special kernel parameters you can have your job make those modifications to the test environment then reboot the test environment within the job. There is some Zuul specific housekeeping that needs to be done post reboot, we can figure that out if we decide to go down this route. Would your setup work with 2M hugepages? the host vm does not need to be backed by hungepages at all. it does need to have the cpuflags set to allow hugepages to be allocated within the host vm. we can test 98% of thing with 2mb hugepages so we do not need to reboot or allocate them via the kernel commandline as we can allocate 2mb hugepages at runtime. networking-ovs-dpdk does this for the nfv ci today and it works fine for testing everything except booting vms with 1G pages. the nova code paths we want to test are identical for 2MB hugepages and 1G so we really done need to boot vms with 1G hugepages inside the host vm. to be able to test that we can boot vms with the pdpe1gb cpu flag set we would need a host vm that has that flag set but we could make that a condtional flag. again the logic for setting an extra cpu flag is idetical regarless of the flag so we can proably test that usecase with another flag such as PCID the host vm whould however need to be a mulit numa node vm (specifically at least 2 numa nodes) and because of the kvm requirement for our gust vm we would need to also have the host vm running on kvm. in theory you are ment to be able to run kvm inside xen but that has never worked form me and geting nested virt to work with different hyperviors is likely more hastle then it is worth. for simplity the intel nfv ci was configured as follow. 
1 phyical compute nodes had either 40 cores (2 HT*2sockets*10cores ivybridge) or 72 cores (2 HT*2sockets*18) and either 64 or 96 GB of ram (cpus are cheap at intel but ram is expensive) 2 each host vm had 2 numa nodes (hw:numa_nodes=2) 3 each host vm had 2 sockets and 2 hyper thread per core and 16-20 vcpus (hw:cpu_sockets=2,hw:cpu_threads=2) 4 each host vm was not pinned on the host (hw:cpu_policy=shared) this was to allow over subscption of cpus 5 the host was configured with cpu_mode=host-passthough to allow compiling dpdk with optimization enabled and to prevent any issues with nested virt. the last 2 iteration of the ci(the one that has been runing for the last 2 years) ran against a kolla-ansible deployed mitaka/newton openstack using kernel ovs on ubuntu 16.04 so the host cloud does not need to be partcaly new to support all the features. we learned over time that while not strictly required based on the workload of running a ci using hugepages for the host vm prevent memory fragmentation and since we disallowed over subsciption of memory in the cloud it gave use a performce boost without reducing or capasity. as a result we started to move the host vms to be hugepage backed but that was just an optimization not a requirement. the guest vms do not need to have 16-20 core by the way. that just allowed use to run more tempest tests in paralell we intially had everything running in 8 core vms but we had to run tempest without paralleisum and had to do some tweeks to the ovs-dpdk configuration to make it use only 1 core instea of the 4 it would have used in the host vm by default. > > > > > > In general though we operate on top of donated cloud resources, and if those do not work we will have to identify a source of resources that would work. > > > > Right, as always it comes down to resources and money. I believe > > historically Red Hat has been opposed to running an upstream third > > party CI (this is by no means an official Red Hat position, just > > remembering what I think I heard), but I can always see what I can do. the intel nfv ci litrally came about be case intel and rehat were collaberating on these features and there was a direct request from the nova core team at the paris summit to have ci before they could be accepted. my team at intel were heiring at the time and we had just ordered in some new server for that so intel volenterreed to run an nfv ci. i grabed one of our 3 node dev clusters gave them to waldek with the 2 node that ran our nightly build and that became the first intel nfv ci 4 years ago runing out of one of our dev labs with ovh web hosting for logs. the last iteration of the nfv ci that i was directly respocible for ran on 10 servers. it tested every patch to nova,os-vif,neutron,devstack,networking-ovs-dpdk,devstack-libvirt-qemu plugin repo and our colled openstack plugins with a target of reporting back in less then 2 hours. it was also ment to run on all changes to networking-odl and networking-ovn but that never got implemented before we tanstioning it out of my old team. if we wanted to cover nova/ovs-cif/devstack and networking-ovs-dpdk that could likely be handeled by just 3-4 64GB 40core servers though the latency would likely be higher. any server hardware that is nehalem or newer has all fo the feature needed to test these feautres so that basically any dual socket x86 server produced since late 2008. But if we could use vms from cloud providers that supports the main ci that would be an even better solution long term. 
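to put the flavor side of that in concrete terms, the host vm flavor described
in points 2-4 above would look roughly like the following (a sketch only -- the
flavor name, ram and disk sizes are placeholders, and point 5 is the cpu_mode
setting on the underlying cloud's compute nodes):

  # host vm flavor for the ci (name and sizes illustrative)
  openstack flavor create nfv-ci-host --vcpus 16 --ram 16384 --disk 80
  openstack flavor set nfv-ci-host \
    --property hw:numa_nodes=2 \
    --property hw:cpu_sockets=2 \
    --property hw:cpu_threads=2 \
    --property hw:cpu_policy=shared

  # nova.conf on the host cloud's compute nodes (point 5)
  [libvirt]
  cpu_mode = host-passthrough

hw:cpu_policy=shared is what allowed oversubscription of cpus on the physical
hosts while still giving each host vm the multi numa topology the tests need.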
maintaining an openstack cloud under ci load is non trival. we had always wanted to upstream the ci when posibel but1 nested virt and mult numa guests were always the blocker. regards sean > > > > [1] > > https://docs.openstack.org/nova/latest/admin/huge-pages.html#enabling-huge-pages-on-the-host > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From gmann at ghanshyammann.com Thu Jul 26 12:19:22 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 26 Jul 2018 21:19:22 +0900 Subject: [openstack-dev] [QA][PTL][Election] Quality Assurance PTL Candidacy for Stein Message-ID: <164d6878314.d6052032148997.2043805957013499432@ghanshyammann.com> Hi Everyone, I would like to announce my candidacy to continue the Quality Assurance PTL role for Stein cycle. I have served as QA PTL in Rocky cycle and as first time being PTL role, it was great experience for me. I have been doing my best effort in Rocky and made sure that we continued serving the QA responsibility in better way and also Improving the many things in QA like new feature test coverage, docs, Tracking Process etc. In Rocky, QA team has successfully executed many of the targeted working items. Few items and things went well are as below:- * Zuul v3 migration and base job available for cross project use. * Running volume v3 API as default in gate testing. Along with that running a single job for v2 API for compatibility checks. * Tempest plugins release process to map with Tempest releases. * Improving the test coverage and service clients. * Releasing sub project like hacking and fix the version issues, projects were facing on every hacking release. * Completing compute microversion response schema gaps in Tempest. * Finishing more and more work in Patrole to make it towards stable release like documentation, more coverage etc. * We are able to continue serving in good term irrespective of resource shortage in QA. * Supporting projects for testing and fixes to continue their development. Apart from above accomplishment, there are still a lot of improvements needed (listed below) and I will try my best to execute the same in next Stein cycle. * Tempest CLI unit test coverage and switching gate job to use all of them. This will help to avoid regression in CLI. * Tempest scenario manage refactoring which is still in messy state and hard to debug. * no progress on QA SIG which will help us to share/consume the QA tooling across communities. * no progress on Destructive testing (Eris) projects. * Plugins cleanup to improve the QA interface usage. * Bug Triage, Our targets was to continue the New bugs count as low which did not went well in Rocky. All the momentum and activities rolling are motivating me to continue another term as QA PTL in order to explore and showcase more challenges. Along with that let me summarize my goals and focus area for Stein cycle: * Continue working on backlogs from above list and finish them based on priority. * Help the Projects' developments with test writing/improvement and gate stability * Plugin improvement and helping them on everything they need from QA. This area need more process and collaboration with plugins team. * Try best to have progress on Eris project. * Start QA SIG to help cross community collaboration. 
* Bring on more contributor and core reviewers. Thanks for reading and consideration my candidacy for Stein cycle. -gmann From sean.mcginnis at gmx.com Thu Jul 26 12:22:01 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 26 Jul 2018 07:22:01 -0500 Subject: [openstack-dev] [release] Release countdown for week R-4, July 30 - August 3 Message-ID: <20180726122200.GA28007@sm-workstation> Hey, I thought you guys might be interested in this release countdown info. ;) Development Focus ----------------- The R-4 week is our one deadline free week between the lib freezes and Rocky-3 milestone and RC. Work should be focused on fixing any requirements update issues, critical bugs, and wrapping up feature work to prepare for the Release Candidate deadline (for deliverables following the with-milestones model) or final Rocky releases (for deliverables following the with-intermediary model) next Thursday, 9th of August. General Information ------------------- For deliverables following the cycle-with-milestones model, we are now (after the day I send this) past Feature Freeze. The focus should be on determining and fixing release-critical bugs. At this stage only bugfixes should be approved for merging in the master branches: feature work should only be considered if explicitly granted a Feature Freeze exception by the team PTL (after a public discussion on the mailing-list). StringFreeze is now in effect, in order to let the I18N team do the translation work in good conditions. The StringFreeze is currently soft (allowing exceptions as long as they are discussed on the mailing-list and deemed worth the effort). It will become a hard StringFreeze on 9th of August along with the RC. The requirements repository is also frozen, until all cycle-with-milestones deliverables have produced a RC1 and have their stable/rocky branches. If release critical library or client library releases are needed for Rocky past the freeze dates, you must request a Feature Freeze Exception (FFE) from the requirements team before we can do a new release to avoid having something released in Rocky that is not actually usable. This is done by posting to the openstack-dev mailing list with a subject line similar to: [$PROJECT][requirements] FFE requested for $PROJECT_LIB Include justification/reasoning for why a FFE is needed for this lib. If/when the requirements team OKs the post-freeze update, we can then process a new release. Including a link to the FFE in the release request is not required, but would be helpful in making sure we are clear to do a new release. Note that deliverables that are not tagged for release by the appropriate deadline will be reviewed to see if they are still active enough to stay on the official project list. Actions --------- stable/rocky branches should be created soon for all not-already-branched libraries. You should expect 2-3 changes to be proposed for each: a .gitreview update, a reno update (skipped for projects not using reno), and a tox.ini constraints URL update*. Please review those in priority so that the branch can be functional ASAP. * The constraints update patches should not be approved until a stable/rocky branch has been created for openstack/requirements. Watch for an unfreeze announcement from the requirements team for this. For cycle-with-intermediary deliverables, release liaisons should consider releasing their latest version, and creating stable/rocky branches from it ASAP. 
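As a reference for the tox.ini constraints URL update mentioned under
"Actions" above, the proposed patch is usually a one-line change along these
lines (illustrative only -- use whatever URL form the requirements team
announces once the stable/rocky branch of openstack/requirements exists):

  -install_command = pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} {opts} {packages}
  +install_command = pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=stable/rocky} {opts} {packages}

Nothing else in tox.ini normally needs to change for the branch.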
For cycle-with-milestones deliverables, release liaisons should wait until R-3 week to create RC1 (to avoid having an RC2 created quickly after). Review release notes for any missing information, and start preparing "prelude" release notes as summaries of the content of the release so that those are merged before the first release candidate. *Release Cycle Highlights* Along with the prelude work, it is also a good time to start planning what highlights you want for your project team in the cycle highlights: Background on cycle-highlights: http://lists.openstack.org/pipermail/openstack-dev/2017-December/125613.html Project Team Guide, Cycle-Highlights: https://docs.openstack.org/project-team-guide/release-management.html#cycle-highlights anne [at] openstack.org/annabelleB on IRC is available if you need help selecting or writing your highlights For release-independent deliverables, release liaisons should check that their deliverable file includes all the existing releases, so that they can be properly accounted for in the releases.openstack.org website. If your team has not done so, remember to file Rocky goal completion information, as explained in: https://governance.openstack.org/tc/goals/index.html#completing-goals Upcoming Deadlines & Dates -------------------------- PTL self-nomination ends: July 31 PTL election starts: August 1 RC1 deadline: August 9 -- Sean McGinnis (smcginnis) From mordred at inaugust.com Thu Jul 26 12:30:35 2018 From: mordred at inaugust.com (Monty Taylor) Date: Thu, 26 Jul 2018 08:30:35 -0400 Subject: [openstack-dev] [sdk] PTL Candidacy for the Stein cycle Message-ID: Hi everybody! I'd like to run for PTL of OpenStackSDK again. This last cycle was great. os-client-config is now just a thin wrapper around openstacksdk. shade still has a bunch of code, but the shade OpenStackCloud object is a subclass of openstack.connection.Connection, so we're in good position to turn shade into a thin wrapper. Ansible and nodepool are now using openstacksdk directly rather than shade and os-client-config. python-openstackclient is also now using openstacksdk for config instead of os-client-config. We were able to push some of the special osc code down into keystoneauth so that it gets its session directly from openstacksdk now too. We plumbed os-service-types in to the config layer so that people can use any of the official aliases for a service in their config. Microversion discovery was added - and we actually even are using it for at least one method (way to be excited, right?) I said last time that we needed to get a 1.0 out during this cycle and we did not accomplish that. Moving forward my number one priority for the Stein cycle is to get the 1.0 release cut, hopefully very early in the cycle. We need to finish plumbing discovery through everywhere, and we need to rationalize the Resource objects and the shade munch objects. As soon as those two are done, 1.0 here we come. After we've got a 1.0, I think we should focus on getting python-openstackclient starting to use more of openstacksdk. I'd also like to start getting services using openstacksdk so that we can start reducing the number of moving parts everywhere. We have cross-testing with the upstream Ansible modules. We should move the test playbooks themselves out of the openstacksdk repo and into the Ansible repo. The caching layer needs an overhaul. What's there was written with nodepool in mind, and is **heavily** relied on in the gate. 
We can't break that, but it's not super friendly for people who are not nodepool (which is most people) I'd like to start moving methods from the shade layer into the sdk proxy layer and, where it makes sense, make the shade layer simple passthrough calls to the proxy layer. We really shouldn't have two different methods for uploading images to a cloud, for instance. Finally, we have some AMAZING docs - but with the merging of shade and os-client-config the overview leaves much to be desired in terms of leading people towards making the right choices. It would be great to get that cleaned up. I'm sure there will be more things to do too. There always are. In any case, I'd love to keep helping to pushing these rocks uphill. Thanks! Monty From mordred at inaugust.com Thu Jul 26 12:40:55 2018 From: mordred at inaugust.com (Monty Taylor) Date: Thu, 26 Jul 2018 08:40:55 -0400 Subject: [openstack-dev] [all][tc][release][election][adjutant] Welcome Adjutant as an official project! In-Reply-To: References: <1531849678-sup-8719@lrrr.local> Message-ID: <550b9d7b-43bb-7d35-2079-d7c3e7ce4434@inaugust.com> On 07/17/2018 08:19 PM, Adrian Turjak wrote: > Thanks! > > As the current project lead for Adjutant I welcome the news, and while I > know it wasn't an easy process would like to thank everyone involved in > the voting. All the feedback (good and bad) will be taken on board to > make the service as suited for OpenStack as possible in the space we've > decided it can fit. > > Now to onboarding, choosing a suitable service type, and preparing for a > busy Stein cycle! Welcome! I believe you're already aware, but once you have chosen a service type, make sure to submit a patch to https://git.openstack.org/cgit/openstack/service-types-authority > On 18/07/18 05:52, Doug Hellmann wrote: >> The Adjutant team's application [1] to become an official project >> has been approved. Welcome! >> >> As I said on the review, because it is past the deadline for Rocky >> membership, Adjutant will not be considered part of the Rocky >> release, but a future release can be part of Stein. >> >> The team should complete the onboarding process for new projects, >> including holding PTL elections for Stein, setting up deliverable >> files in the openstack/releases repository, and adding meeting >> information to eavesdrop.openstack.org. >> >> I have left a comment on the patch setting up the Stein election >> to ask that the Adjutant team be included. We can also add Adjutant >> to the list of projects on docs.openstack.org for Stein, after >> updating your publishing job(s). 
>> >> Doug >> >> [1] https://review.openstack.org/553643 >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From d.krol at samsung.com Thu Jul 26 13:56:05 2018 From: d.krol at samsung.com (Dariusz Krol) Date: Thu, 26 Jul 2018 15:56:05 +0200 Subject: [openstack-dev] openstack-dev] [trove] Considering the transfter of the project leadership In-Reply-To: References: Message-ID: <20180726135606eucas1p20806be956e7c195644e8aa65f3c85430~E77yY7Tyo2143821438eucas1p2G@eucas1p2.samsung.com> Hello All, as a member of Samsung R&D Center in Krakow, I would like to confirm that we are very interested in Trove development. We also notices that Trove project has a small team now and that community around Trove becomes smaller with each release which is a shame since it is a great project. That is why we would like to step up and help with development and leadership. We started our contribution with code reviews and we also submitted our first contributions to trove-tempest-plugin. We intend to increase our involvement in the community but we understand it need to take some time and help from community. I would like to thank current Trove team for warm welcome, and I'm really looking forward to the future collaboration with the community. Kind regards, Dariusz Krol On 07/25/2018 06:18 PM, 赵超 wrote: > cc to the Trove team members and guys from Samsung R&D Center in > Krakow, Poland privately, so anyone of them who are not reading the ML > could also be notified. > > On Thu, Jul 26, 2018 at 12:09 AM, 赵超 > wrote: > > Hi All, > > Trove currently has a really small team, and all the active team > members are from China, we had some good discussions during the > Rocky online PTG meetings[1], and the goals were arranged and > priorited [2][3]. But it's sad that none of us could focus on the > project, and the number of patches and reviews fall a lot in this > cycle comparing Queens. > > [1] https://etherpad.openstack.org/p/trove-ptg-rocky > > [2] > https://etherpad.openstack.org/p/trove-priorities-and-specs-tracking > > [3] > https://docs.google.com/spreadsheets/d/1Jz6TnmRHnhbg6J_tSBXv-SvYIrG4NLh4nWejupxqdeg/edit#gid=0 > > > And for me, it's a really great chance to play as the PTL role of > Trove, and I learned a lot during this cycle(from Trove projects > to the CI infrastrues, and more). However in this cycle, I have > been with no bandwith to work on the project for months, and the > situation seems not be better in the forseeable future, so I think > it's better to transfter the leadership, and look for opportunites > for more anticipations in the project. > > A good news is recently a team from Samsung R&D Center in Krakow, > Poland joined us, they're building a product on OpenStack, have > done improvments on Trove(internally), and now interested in > contributing to the community, starting by migrating the > intergating tests to the tempest plugin. They're also willing and > ready to act as the PTL role. 
The only problem for their > nomination may be that none of them have a patched merged into the > Trove projects. There're some in the trove-tempest-plugin waiting > review, but according to the activities of the project, these > patches may need a long time to merge (and we're at Rocky > milestone-3, I think we could merge patches in the > trove-tempest-plugin, as they're all abouth testing). > > I also hope and welcome the other current active team members of > Trove could nominate themselves, in that way, we could get more > discussions about how we think about the direction of Trove. > > I'll stll be here, to help the migration of the integration tests, > CentOS guest images support, Cluster improvement and all other > goals we discussed before, and code review. > > Thanks. > > -- > To be free as in freedom. > > > > > -- > To be free as in freedom. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 13168 bytes Desc: not available URL: From rosmaita.fossdev at gmail.com Thu Jul 26 14:28:56 2018 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Thu, 26 Jul 2018 10:28:56 -0400 Subject: [openstack-dev] [glance] FFE for multihash Message-ID: I'm asking for a Feature Freeze Exception for the glance-side work for the Secure Hash Algorithm Support (multihash) feature [0]. The work is underway and should be completed early next week. cheers, brian [0] https://specs.openstack.org/openstack/glance-specs/specs/rocky/approved/glance/multihash.html From akekane at redhat.com Thu Jul 26 14:35:04 2018 From: akekane at redhat.com (Abhishek Kekane) Date: Thu, 26 Jul 2018 20:05:04 +0530 Subject: [openstack-dev] [glance] FFE for multi-backend Message-ID: I'm asking for a Feature Freeze Exception for Multiple backend support (multi-store) feature [0]. The only remaining work is a versioning patch to flag this feature as experimental and should be completed early next week. ​[0] https://specs.openstack.org/openstack/glance-specs/specs/rocky/approved/glance/multi-store.html Patches open for review: https://review.openstack.org/#/q/status:open+project:openstack/glance+branch:master+topic:bp/multi-store​ Th​ anks & Best Regards, Abhishek Kekane -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Thu Jul 26 14:48:51 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 26 Jul 2018 09:48:51 -0500 Subject: [openstack-dev] openstack-dev] [trove] Considering the transfter of the project leadership In-Reply-To: References: Message-ID: <20180726144850.GA4574@sm-workstation> > > > > A good news is recently a team from Samsung R&D Center in Krakow, Poland > > joined us, they're building a product on OpenStack, have done improvments > > on Trove(internally), and now interested in contributing to the community, > > starting by migrating the intergating tests to the tempest plugin. They're > > also willing and ready to act as the PTL role. The only problem for their > > nomination may be that none of them have a patched merged into the Trove > > projects. There're some in the trove-tempest-plugin waiting review, but > > according to the activities of the project, these patches may need a long > > time to merge (and we're at Rocky milestone-3, I think we could merge > > patches in the trove-tempest-plugin, as they're all abouth testing). 
> > > > I also hope and welcome the other current active team members of Trove > > could nominate themselves, in that way, we could get more discussions about > > how we think about the direction of Trove. > > Great to see another group getting involved! It's too bad there hasn't been enough time to build up some experience working upstream and getting at least a few more commits under their belt, but this sounds like things are heading in the right direction. Since the new folks are still so new - if this works for you - I would recommend continuing on as the official PTL for one more release, but with the understanding that you would just be around to answer questions and give advice to help the new team get up to speed. That should hopefully be a small time commitment for you while still easing that transition. Then hopefully by the T release it would not be an issue at all for someone else to step up as the new PTL. Or even if things progress well, you could step down as PTL at some point during the Stein cycle if someone is ready to take over for you. Just a suggestion to help ease the process. Sean From openstack at nemebean.com Thu Jul 26 14:50:18 2018 From: openstack at nemebean.com (Ben Nemec) Date: Thu, 26 Jul 2018 09:50:18 -0500 Subject: [openstack-dev] [tripleo] Setting swift as glance backend In-Reply-To: References: Message-ID: It looks like Glance defaults to Swift, so you shouldn't need to do anything: https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/services/glance-api.yaml#L96 On 07/26/2018 12:41 AM, Samuel Monderer wrote: > Hi, > > I would like to deploy a small overcloud with just one controller and > one compute for testing. > I want to use swift as the glance backend. > How do I configure the overcloud templates? > > Samuel > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From mriedemos at gmail.com Thu Jul 26 15:14:04 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 26 Jul 2018 10:14:04 -0500 Subject: [openstack-dev] Should we add a tempest-slow job? In-Reply-To: References: <9b338d82-bbcf-f6c0-9ba0-9a402838d958@gmail.com> Message-ID: On 5/13/2018 9:06 PM, Ghanshyam Mann wrote: >> +1 on idea. As of now slow marked tests are from nova, cinder and >> neutron scenario tests and 2 API swift tests only [4]. I agree that >> making a generic job in tempest is better for maintainability. We can >> use existing job for that with below modification- >> - We can migrate >> "legacy-tempest-dsvm-neutron-scenario-multinode-lvm-multibackend" job >> zuulv3 in tempest repo >> - We can see if we can move migration tests out of it and use >> "nova-live-migration" job (in tempest check pipeline ) which is much >> better in live migration env setup and controlled by nova. >> - then it can be name something like >> "tempest-scenario-multinode-lvm-multibackend". >> - run this job in nova, cinder, neutron check pipeline instead of experimental. > Like this -https://review.openstack.org/#/q/status:open+project:openstack/tempest+branch:master+topic:scenario-tests-job > > That makes scenario job as generic with running all scenario tests > including slow tests with concurrency 2. I made few cleanup and moved > live migration tests out of it which is being run by > 'nova-live-migration' job. 
Last patch making this job as voting on > tempest side. > > If looks good, we can use this to run on project side pipeline as voting. > > -gmann > I should have said something earlier, but I've said it on my original nova change now: https://review.openstack.org/#/c/567697/ What was implemented in Tempest isn't really at all what I was going for, especially since it doesn't run the API tests marked 'slow'. All I want is a job like tempest-full (which excludes slow tests) to be tempest-full which *only* runs slow tests. They would run a mutually exclusive set of tests so we have that coverage. I don't care if the scenario tests are run in parallel or serial (it's probably best to start in serial like tempest-full today and then change to parallel later if that settles down). But I think it's especially important given: https://review.openstack.org/#/c/567697/2 That we have a job which only runs slow tests because we're going to be marking more tests as "slow" pretty soon and we don't need the overlap with the existing tests that are run in tempest-full. -- Thanks, Matt From chris.friesen at windriver.com Thu Jul 26 15:19:38 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Thu, 26 Jul 2018 09:19:38 -0600 Subject: [openstack-dev] [nova] keypair quota usage info for user In-Reply-To: References: <164d0d39b12.cc6ff7f3115505.8532449585113306574@ghanshyammann.com> Message-ID: <5B59E68A.8060209@windriver.com> On 07/25/2018 06:21 PM, Alex Xu wrote: > > > 2018-07-26 0:29 GMT+08:00 William M Edmonds >: > > > Ghanshyam Mann > > wrote on 07/25/2018 05:44:46 AM: > ... snip ... > > 1. is it ok to show the keypair used info via API ? any original > > rational not to do so or it was just like that from starting. > > keypairs aren't tied to a tenant/project, so how could nova track/report a > quota for them on a given tenant/project? Which is how the API is > constructed... note the "tenant_id" in GET /os-quota-sets/{tenant_id}/detail > > > Keypairs usage is only value for the API 'GET > /os-quota-sets/{tenant_id}/detail?user_id={user_id}' The objection is that keypairs are tied to the user, not the tenant, so it doesn't make sense to specify a tenant_id in the above query. And for Pike at least I think the above command does not actually show how many keypairs have been created by that user...it still shows zero. Chris From chris.friesen at windriver.com Thu Jul 26 15:22:55 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Thu, 26 Jul 2018 09:22:55 -0600 Subject: [openstack-dev] [nova] keypair quota usage info for user In-Reply-To: References: <164d0d39b12.cc6ff7f3115505.8532449585113306574@ghanshyammann.com> <5B58B6AE.1@windriver.com> Message-ID: <5B59E74F.3070704@windriver.com> On 07/25/2018 06:22 PM, Alex Xu wrote: > > > 2018-07-26 1:43 GMT+08:00 Chris Friesen >: > Keypairs are weird in that they're owned by users, not projects. This is > arguably wrong, since it can cause problems if a user boots an instance with > their keypair and then gets removed from a project. > > Nova microversion 2.54 added support for modifying the keypair associated > with an instance when doing a rebuild. Before that there was no clean way > to do it. > > > I don't understand this, we didn't count the keypair usage with the instance > together, we just count the keypair usage for specific user. I was giving an example of why it's strange that keypairs are owned by users rather than projects. (When instances are owned by projects, and keypairs are used to access instances.) 
Chris From melwittt at gmail.com Thu Jul 26 15:37:38 2018 From: melwittt at gmail.com (melanie witt) Date: Thu, 26 Jul 2018 08:37:38 -0700 Subject: [openstack-dev] [nova] keypair quota usage info for user In-Reply-To: <5B59E68A.8060209@windriver.com> References: <164d0d39b12.cc6ff7f3115505.8532449585113306574@ghanshyammann.com> <5B59E68A.8060209@windriver.com> Message-ID: <3c57ed20-f05b-a743-9b01-f834c0f74e6a@gmail.com> On Thu, 26 Jul 2018 09:19:38 -0600, Chris Friesen wrote: > On 07/25/2018 06:21 PM, Alex Xu wrote: >> >> >> 2018-07-26 0:29 GMT+08:00 William M Edmonds > >: >> >> >> Ghanshyam Mann > >> wrote on 07/25/2018 05:44:46 AM: >> ... snip ... >> > 1. is it ok to show the keypair used info via API ? any original >> > rational not to do so or it was just like that from starting. >> >> keypairs aren't tied to a tenant/project, so how could nova track/report a >> quota for them on a given tenant/project? Which is how the API is >> constructed... note the "tenant_id" in GET /os-quota-sets/{tenant_id}/detail >> >> >> Keypairs usage is only value for the API 'GET >> /os-quota-sets/{tenant_id}/detail?user_id={user_id}' > > The objection is that keypairs are tied to the user, not the tenant, so it > doesn't make sense to specify a tenant_id in the above query. > > And for Pike at least I think the above command does not actually show how many > keypairs have been created by that user...it still shows zero. Yes, for Pike during the re-architecting of quotas to count resources instead of tracking usage separately, we kept the "always zero" count for usage of keypairs, server group members, and security group rules, so as not to change the behavior. It's been my understanding that we would need a microversion to change any of those to actually return a count. It's true the counts would not make sense under the 'tenant_id' part of the URL though. -melanie From cdent+os at anticdent.org Thu Jul 26 16:15:03 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Thu, 26 Jul 2018 17:15:03 +0100 (BST) Subject: [openstack-dev] [nova] [placement] compute nodes use of placement Message-ID: HTML: https://anticdent.org/novas-use-of-placement.html A year and a half ago I did some analysis on how [nova uses placement](http://lists.openstack.org/pipermail/openstack-dev/2017-January/110953.html). I've repeated some of that analysis today and here's a brief summary of the results. Note that I don't present this because I'm concerned about load on placement, we've demonstrated that placement scales pretty well. Rather, this analysis indicates that the compute node is doing redundant work which we'd prefer not to do. The compute node can't scale horizontally in the same way placement does. If offloading the work to placement and being redundant is the easiest way to avoid work on the compute node, let's do that, but that doesn't seem to be quite what's happening here. Nova uses placement mainly from two places: * The `nova-compute` nodes report resource provider and inventory to placement and make sure that the placement view of what hardware is present is accurate. * The `nova-scheduler` processes request candidates for placement, and claim resources by writing allocations to placement. There are some additional interactions, mostly associated with migrations or fixing up unusual edge cases. Since those things are rare they are sort of noise in this discussion, so left out. 
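If you want to poke at these endpoints yourself while following along, a minimal sketch like the following (untested, and assuming a clouds.yaml entry named 'devstack') issues the same style of request through the keystoneauth session that openstacksdk builds:

    import openstack

    conn = openstack.connect(cloud='devstack')
    # There is no separate placement client used here; talk to the API
    # directly through the session and pin a microversion with the header.
    resp = conn.session.get(
        '/resource_providers',
        endpoint_filter={'service_type': 'placement', 'interface': 'public'},
        headers={'OpenStack-API-Version': 'placement 1.17',
                 'Accept': 'application/json'})
    print(resp.json())
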
When a basic (where basic means no nested resource providers) compute node starts up it POSTs to create a resource provider and then PUTs to set the inventory. After that a periodic job runs, usually every 60 seconds. In that job we see the following 11 requests:

    GET /placement/resource_providers?in_tree=82fffbc6-572b-4db0-b044-c47e34b27ec6
    GET /placement/resource_providers/82fffbc6-572b-4db0-b044-c47e34b27ec6/inventories
    GET /placement/resource_providers/82fffbc6-572b-4db0-b044-c47e34b27ec6/aggregates
    GET /placement/resource_providers/82fffbc6-572b-4db0-b044-c47e34b27ec6/traits
    GET /placement/resource_providers/82fffbc6-572b-4db0-b044-c47e34b27ec6/inventories
    GET /placement/resource_providers/82fffbc6-572b-4db0-b044-c47e34b27ec6/allocations
    GET /placement/resource_providers?in_tree=82fffbc6-572b-4db0-b044-c47e34b27ec6
    GET /placement/resource_providers/82fffbc6-572b-4db0-b044-c47e34b27ec6/inventories
    GET /placement/resource_providers/82fffbc6-572b-4db0-b044-c47e34b27ec6/aggregates
    GET /placement/resource_providers/82fffbc6-572b-4db0-b044-c47e34b27ec6/traits
    GET /placement/resource_providers/82fffbc6-572b-4db0-b044-c47e34b27ec6/inventories

A year and a half ago it was 5 requests per-cycle, but they were different requests:

    GET /placement/resource_providers/0e33c6f5-62f3-4522-8f95-39b364aa02b4/aggregates
    GET /placement/resource_providers/0e33c6f5-62f3-4522-8f95-39b364aa02b4/inventories
    GET /placement/resource_providers/0e33c6f5-62f3-4522-8f95-39b364aa02b4/allocations
    GET /placement/resource_providers/0e33c6f5-62f3-4522-8f95-39b364aa02b4/aggregates
    GET /placement/resource_providers/0e33c6f5-62f3-4522-8f95-39b364aa02b4/inventories

The difference comes from two changes:

* We no longer confirm allocations on the compute node.
* We now have things called ProviderTrees which are responsible for managing nested providers, aggregates and traits in a unified fashion.

It appears, however, that we have some redundancies. We get inventories 4 times; aggregates, providers and traits 2 times, and allocations once.

The `in_tree` calls happen from the report client method `_get_providers_in_tree`, which is called by `_ensure_resource_provider`. That method can be called from multiple places, but in this case it is being called both times from `get_provider_tree_and_ensure_root`, which is also responsible for two of the inventory requests.

`get_provider_tree_and_ensure_root` is called by `_update` in the resource tracker. `_update` is called by both `_init_compute_node` and `_update_available_resource`. Every single periodic job iteration. `_init_compute_node` is called from `_update_available_resource` itself. That accounts for the overall doubling.

The two inventory calls per group come from the following, in `get_provider_tree_and_ensure_root`:

1. `_ensure_resource_provider` in the report client calls `_refresh_and_get_inventory` for every provider in the tree (the result of the `in_tree` query)

2. Immediately after the call to `_ensure_resource_provider`, every provider in the provider tree (from `self._provider_tree.get_provider_uuids()`) then has a `_refresh_and_get_inventory` call made.

In a non-sharing, non-nested scenario (such as a single node devstack, which is where I'm running this analysis) these are the exact same resource provider. I'm insufficiently aware of what might be in the provider tree in more complex situations to be clear on what could be done to limit redundancy here, but it's a place worth looking.
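To make the doubling easier to see, here is a tiny self-contained toy model of that call graph. To be clear, this is not the nova code, just a sketch that replays the same sequence; the position of the allocations check is inferred from the observed request order:

    requests = []

    def _refresh_and_get_inventory(rp):
        requests.append('GET /placement/resource_providers/%s/inventories' % rp)

    def _refresh_associations(rp):
        requests.append('GET /placement/resource_providers/%s/aggregates' % rp)
        requests.append('GET /placement/resource_providers/%s/traits' % rp)

    def _ensure_resource_provider(rp):
        requests.append('GET /placement/resource_providers?in_tree=%s' % rp)
        _refresh_and_get_inventory(rp)
        _refresh_associations(rp)

    def get_provider_tree_and_ensure_root(rp):
        _ensure_resource_provider(rp)
        # ...and then inventory is refreshed again for every provider in the tree
        _refresh_and_get_inventory(rp)

    def _update(rp):
        get_provider_tree_and_ensure_root(rp)

    def _update_available_resource(rp):
        _update(rp)  # first call, via _init_compute_node()
        # _remove_deleted_instances_allocations(), discussed below
        requests.append('GET /placement/resource_providers/%s/allocations' % rp)
        _update(rp)  # second call, directly from the periodic task

    _update_available_resource('82fffbc6-572b-4db0-b044-c47e34b27ec6')
    print(len(requests))  # 11, in the same order as the list above

Two trips through `_update`, at five requests each, plus the single allocations check, is where the 11 requests per iteration come from.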
The requests for aggregates and traits happen via `_refresh_associations` in `_ensure_resource_provider`. The single allocation request is from the resource tracker calling `_remove_deleted_instances_allocations` checking to see if it is possible to clean up any allocations left over from migrations. ## Summary/Actions So what now? There are two avenues for potential investigation: 1. Each time `_update` is called it calls `get_provider_tree_and_ensure_root`. Can one of those be skipped while keeping the rest of `_update`? Or perhaps it is possible to avoid one of the calls to `_update` entirely? 2. Can `get_provider_tree_and_ensure_root` tries to manage inventory twice be rationalized for simple cases? I've run out of time for now, so this doesn't address the requests that happen once an instance exists. I'll get to that another time. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From aschultz at redhat.com Thu Jul 26 16:24:41 2018 From: aschultz at redhat.com (Alex Schultz) Date: Thu, 26 Jul 2018 10:24:41 -0600 Subject: [openstack-dev] [tripleo] FFE request for config-download-ui In-Reply-To: References: Message-ID: On Thu, Jul 26, 2018 at 2:31 AM, Jiri Tomasek wrote: > Hello, > > I would like to request a FFE for [1]. Current status of TripleO UI patches > is here [2] there are last 2 patches pending review which currently depend > on [3] which is close to land. > > [1] https://blueprints.launchpad.net/tripleo/+spec/config-download-ui/ > [2] > https://review.openstack.org/#/q/project:openstack/tripleo-ui+branch:master+topic:bp/config-download-ui > [3] https://review.openstack.org/#/c/583293/ > Sounds good. Let's get those last two patches landed. Thanks, -Alex > Thanks > -- Jiri > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From johfulto at redhat.com Thu Jul 26 16:30:06 2018 From: johfulto at redhat.com (John Fulton) Date: Thu, 26 Jul 2018 12:30:06 -0400 Subject: [openstack-dev] [tripleo] [tripleo-validations] using using top-level fact vars will deprecated in future Ansible versions In-Reply-To: <159c9b6c-077a-6328-d4f7-fde9664a3571@redhat.com> References: <59716157-D28C-4DA8-89EC-0E98E8072153@redhat.com> <159c9b6c-077a-6328-d4f7-fde9664a3571@redhat.com> Message-ID: On Thu, Jul 26, 2018 at 1:48 AM Cédric Jeanneret wrote: > > Hello Sam, > > Thanks for the clarifications. > > On 07/25/2018 07:46 PM, Sam Doran wrote: > > I spoke with other Ansible Core devs to get some clarity on this change. > > > > This is not a change that is being made quickly, lightly, or without a > > whole of bunch of reservation. In fact, that PR created by agaffney may > > not be merged any time soon. He just wanted to get something started and > > there is still ongoing discussion on that PR. It is definitely a WIP at > > this point. > > > > The main reason for this change is that pretty much all of the Ansible > > CVEs to date came from "fact injection", meaning a fact that contains > > executable Python code Jinja will merrily exec(). Vars, hostvars, and > > facts are different in Ansible (yes, this is confusing — sorry). All > > vars go through a templating step. By copying facts to vars, it means > > facts get templated controller side which could lead to controller > > compromise if malicious code exists in facts. 
> > > > We created an AnsibleUnsafe class to protect against this, but stopping > > the practice of injecting facts into vars would close the door > > completely. It also alleviates some name collisions if you set a hostvar > > that has the same name as a var. We have some methods that filter out > > certain variables, but keeping facts and vars in separate spaces is much > > cleaner. > > > > This also does not change how hostvars set via set_fact are referenced. > > (set_fact should really be called set_host_var). Variables set with > > set_fact are not facts and are therefore not inside the ansible_facts > > dict. They are in the hostvars dict, which you can reference as {{ > > my_var }} or {{ hostvars['some-host']['my_var'] }} if you need to look > > it up from a different host. > > so if, for convenience, we do this: > vars: > a_mounts: "{{ hostvars[inventory_hostname].ansible_facts.mounts }}" > > That's completely acceptable and correct, and won't create any security > issue, right? > > > > > All that being said, the setting to control this behavior as Emilien > > pointed out is inject_facts_as_vars, which defaults to True and will > > remain that way for the foreseeable future. I would not rush into > > changing all the fact references in playbooks. It can be a gradual process. > > > > Setting inject_facts_as_vars toTrue means ansible_hostname becomes > > ansible_facts.hostname. You do not have to use the hostvars dictionary — > > that is for looking up facts about hosts other than the current host. > > > > If you wanted to be proactive, you could start using the ansible_facts > > dictionary today since it is compatible with the default setting and > > will not affect others trying to use playbooks that reference ansible_facts. > > > > In other words, with the default setting of True, you can use either > > ansible_hostname or ansible_facts.hostname. Changing it to False means > > only ansible_facts.hostname is defined. > > > >> Like, really. I know we can't really have a word about that kind of > >> decision, but... damn, WHY ?! > > > > That is most certainly not the case. Ansible is developed in the open > > and we encourage community members to attend meetings > > and add > > topics to the agenda > > for discussion. > > Ansible also goes through a proposal process for major changes, which > > you can view here > > . > > > > You can always go to #ansible-devel on Freenode or start a discussion on > > the mailing list > > to speak with > > the Ansible Core devs about these things as well. > > And I also have the "Because" linked to my "why" :). big thanks! Do we have a plan for which Ansible version might be the default in upcoming TripleO versions? If this is the thread to discuss it then, I want to point out that TripleO's been using ceph-ansible for Ceph integration on the client and server side since Pike and that ceph-ansible 3.1 (which TripleO master currently uses) fails on Ansible 2.6 and that this won't be addressed until ceph-ansible 3.2. John > > Bests, > > C. 
> > > > > --- > > > > Respectfully, > > > > Sam Doran > > Senior Software Engineer > > Ansible by Red Hat > > sdoran at redhat.com > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > -- > Cédric Jeanneret > Software Engineer > DFG:DF > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mriedemos at gmail.com Thu Jul 26 16:32:19 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 26 Jul 2018 11:32:19 -0500 Subject: [openstack-dev] [Nova][Cinder][Tempest] Help with tempest.api.compute.servers.test_device_tagging.TaggedAttachmentsTest.test_tagged_attachment needed In-Reply-To: References: <6AEB5700-BBCA-46C3-9A48-83EC7CC92475@redhat.com> Message-ID: On 7/23/2018 4:20 AM, Slawomir Kaplonski wrote: > Thx Artom for taking care of it. Did You made any progress? > I think that it might be quite important to fix as it failed around 50 times during last 7 days: > http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22line%20386%2C%20in%20test_tagged_attachment%5C%22 I've proposed a Tempest change to skip that part of the test for now: https://review.openstack.org/#/c/586292/ We could revert that and link it to artom's debug patch to see if we can recreate with proper debug. -- Thanks, Matt From mriedemos at gmail.com Thu Jul 26 16:35:20 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 26 Jul 2018 11:35:20 -0500 Subject: [openstack-dev] Lots of slow tests timing out jobs In-Reply-To: <164d030c39e.101330779109144.5025774575873163310@ghanshyammann.com> References: <164d030c39e.101330779109144.5025774575873163310@ghanshyammann.com> Message-ID: <75da5a99-b114-6914-7eb2-6de74ed593c0@gmail.com> On 7/25/2018 1:46 AM, Ghanshyam Mann wrote: > As per avg time, I have voted (currently based on 14 days avg) on ethercalc which all test to mark as slow. I taken the criteria of >120 sec avg time. Once we have more and more people votes there we can mark them slow. > > [3]https://ethercalc.openstack.org/dorupfz6s9qt I've made my votes for the compute-specific tests along with justification either way on each one. -- Thanks, Matt From ed at leafe.com Thu Jul 26 16:39:00 2018 From: ed at leafe.com (Ed Leafe) Date: Thu, 26 Jul 2018 11:39:00 -0500 Subject: [openstack-dev] Subject: [all][api] POST /api-sig/news Message-ID: <12AB13AF-4A00-4D0F-A45A-D7F418762BC3@leafe.com> Greetings OpenStack community, We had a short but sweet meeting today, as all four core members were around for the first time in several weeks. The one action item from last week, reaching out to the people working on the GraphQL experiment, was done, but so far we have not heard back on their progress. notmyname suggested that we investigate the IETF [7] draft proposal for Best Practices when building HTTP protocols [8] which may be relevant to our work, so we all agreed to review the document (all 30 pages of it!) by next week, where we will discuss it further. Finally, we merged two patches that had had universal approval (yes, the *entire* universe), sending cdent's stats through the roof. 
As always if you're interested in helping out, in addition to coming to the meetings, there's also: * The list of bugs [5] indicates several missing or incomplete guidelines. * The existing guidelines [2] always need refreshing to account for changes over time. If you find something that's not quite right, submit a patch [6] to fix it. * Have you done something for which you think guidance would have made things easier but couldn't find any? Submit a patch and help others [6]. # Newly Published Guidelines * Expand schema for error.codes to reflect reality https://review.openstack.org/#/c/580703/ * Add links to error-example.json https://review.openstack.org/#/c/578369/ # API Guidelines Proposed for Freeze * None # Guidelines that are ready for wider review by the whole community. * None # Guidelines Currently Under Review [3] * Update parameter names in microversion sdk spec https://review.openstack.org/#/c/557773/ * Add API-schema guide (still being defined) https://review.openstack.org/#/c/524467/ * A (shrinking) suite of several documents about doing version and service discovery Start at https://review.openstack.org/#/c/459405/ * WIP: microversion architecture archival doc (very early; not yet ready for review) https://review.openstack.org/444892 # Highlighting your API impacting issues If you seek further review and insight from the API SIG about APIs that you are developing or changing, please address your concerns in an email to the OpenStack developer mailing list[1] with the tag "[api]" in the subject. In your email, you should include any relevant reviews, links, and comments to help guide the discussion of the specific challenge you are facing. To learn more about the API SIG mission and the work we do, see our wiki page [4] and guidelines [2]. Thanks for reading and see you next week! # References [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [2] http://specs.openstack.org/openstack/api-wg/ [3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z [4] https://wiki.openstack.org/wiki/API_SIG [5] https://bugs.launchpad.net/openstack-api-wg [6] https://git.openstack.org/cgit/openstack/api-wg [7] https://ietf.org/ [8] https://tools.ietf.org/html/draft-ietf-httpbis-bcp56bis-06 Meeting Agenda https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda Past Meeting Records http://eavesdrop.openstack.org/meetings/api_sig/ Open Bugs https://bugs.launchpad.net/openstack-api-wg -- Ed Leafe From mriedemos at gmail.com Thu Jul 26 16:42:01 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 26 Jul 2018 11:42:01 -0500 Subject: [openstack-dev] [tc] Fast-track: Remove Stable branch maintenance as a project team In-Reply-To: References: Message-ID: On 7/25/2018 3:07 PM, Mohammed Naser wrote: > Hi everyone: > > This email is just to notify everyone on the TC and the community that > the change to remove the stable branch maintenance as a project > team[1] has been fast-tracked[2]. > > The change should be approved on 2018-07-28 however it is beneficial > to remove the stable branch team (which has been moved into a SIG) in > order for `tonyb` to be able to act as an election official. > > There seems to be no opposing votes however a revert is always > available if any members of the TC are opposed to the change[3]. > > Thanks to Tony for all of his help in the elections. 
> > Regards, > Mohammed > > [1]:https://review.openstack.org/#/c/584206/ > [2]:https://governance.openstack.org/tc/reference/house-rules.html#other-project-team-updates > [3]:https://governance.openstack.org/tc/reference/house-rules.html#rolling-back-fast-tracked-changes First time I've heard of it...but thanks. I personally don't think calling something a SIG magically makes people appear to help out, like creating a stable maintenance official project team and PTL didn't really grow a contributor base either, but so it goes. Only question I have is will the stable:follows-policy governance tag [1] also be removed? [1] https://governance.openstack.org/tc/reference/tags/stable_follows-policy.html -- Thanks, Matt From sean.mcginnis at gmx.com Thu Jul 26 17:00:53 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 26 Jul 2018 12:00:53 -0500 Subject: [openstack-dev] [tc] Fast-track: Remove Stable branch maintenance as a project team In-Reply-To: References: Message-ID: <20180726170052.GA15608@sm-workstation> > > Only question I have is will the stable:follows-policy governance tag [1] > also be removed? > > [1] > https://governance.openstack.org/tc/reference/tags/stable_follows-policy.html > I wouldn't think so. Nothing is changing with the policy, so it is still of interest to see which projects are following that. I don't believe the policy was tied in any way with stable being an actual project team vs a SIG. From melwittt at gmail.com Thu Jul 26 17:43:05 2018 From: melwittt at gmail.com (melanie witt) Date: Thu, 26 Jul 2018 10:43:05 -0700 Subject: [openstack-dev] [requirements][release] FFE for os-vif 1.11.1 Message-ID: <06944768-740f-12ed-db71-4d5687200b72@gmail.com> Hello, I'd like to ask for an exception to add os-vif 1.11.1 to stable/rocky. The current release for rocky, 1.11.0, added a new feature: the NoOp Plugin, but it's not actually usable (it's not being loaded) because we missed adding a file to the setup.cfg. We have fixed the problem in a one liner add to setup.cfg [1] and we would like to be able to do another release 1.11.1 for rocky to include this fix. That way, the NoOp Plugin feature advertised in the release notes [2] for rocky would be usable for consumers. Cheers, -melanie [1] https://review.openstack.org/585530 [2] https://docs.openstack.org/releasenotes/os-vif/unreleased.html#relnotes-1-11-0 From prometheanfire at gentoo.org Thu Jul 26 18:01:18 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Thu, 26 Jul 2018 13:01:18 -0500 Subject: [openstack-dev] [requirements][release] FFE for os-vif 1.11.1 In-Reply-To: <06944768-740f-12ed-db71-4d5687200b72@gmail.com> References: <06944768-740f-12ed-db71-4d5687200b72@gmail.com> Message-ID: <20180726180118.f72dopljnzj67vsb@gentoo.org> On 18-07-26 10:43:05, melanie witt wrote: > Hello, > > I'd like to ask for an exception to add os-vif 1.11.1 to stable/rocky. The > current release for rocky, 1.11.0, added a new feature: the NoOp Plugin, but > it's not actually usable (it's not being loaded) because we missed adding a > file to the setup.cfg. > > We have fixed the problem in a one liner add to setup.cfg [1] and we would > like to be able to do another release 1.11.1 for rocky to include this fix. > That way, the NoOp Plugin feature advertised in the release notes [2] for > rocky would be usable for consumers. > > [1] https://review.openstack.org/585530 > [2] https://docs.openstack.org/releasenotes/os-vif/unreleased.html#relnotes-1-11-0 > Yep, we talked about it in the release channel. 
+----------------------------------------+--------------------------------------------------------------------+------+------------------------------------+ | Repository | Filename | Line | Text | +----------------------------------------+--------------------------------------------------------------------+------+------------------------------------+ | kuryr-kubernetes | requirements.txt | 18 | os-vif!=1.8.0,>=1.7.0 # Apache-2.0 | | nova | requirements.txt | 59 | os-vif!=1.8.0,>=1.7.0 # Apache-2.0 | | nova-lxd | requirements.txt | 7 | os-vif!=1.8.0,>=1.9.0 # Apache-2.0 | | networking-bigswitch | requirements.txt | 6 | os-vif>=1.1.0 # Apache-2.0 | | networking-bigswitch | test-requirements.txt | 25 | os-vif>=1.1.0 # Apache-2.0 | | networking-midonet | test-requirements.txt | 40 | os-vif!=1.8.0,>=1.7.0 # Apache-2.0 | +----------------------------------------+--------------------------------------------------------------------+------+------------------------------------+ All these projects would need re-releases if you plan on raising the minimum. They would also need reviews submitted individually for that. A upper-constraint only fix would not need that, but would also still allow consumers to encounter the bug, up to you to decide. LGTM otherwise. -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From emilien at redhat.com Thu Jul 26 18:07:08 2018 From: emilien at redhat.com (Emilien Macchi) Date: Thu, 26 Jul 2018 14:07:08 -0400 Subject: [openstack-dev] [tripleo] [tripleo-validations] using using top-level fact vars will deprecated in future Ansible versions In-Reply-To: References: <59716157-D28C-4DA8-89EC-0E98E8072153@redhat.com> <159c9b6c-077a-6328-d4f7-fde9664a3571@redhat.com> Message-ID: On Thu, Jul 26, 2018 at 12:30 PM John Fulton wrote: > Do we have a plan for which Ansible version might be the default in > upcoming TripleO versions? > > If this is the thread to discuss it then, I want to point out that > TripleO's been using ceph-ansible for Ceph integration on the client > and server side since Pike and that ceph-ansible 3.1 (which TripleO > master currently uses) fails on Ansible 2.6 and that this won't be > addressed until ceph-ansible 3.2. > I think the last thing we want is to break TripleO + Ceph integration so we will maintain Ansible 2.5.x in TripleO Rocky and upgrade to 2.6.x in Stein when ceph-ansible 3.2 is used and working well. Hope it's fine for everyone, -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Thu Jul 26 18:28:39 2018 From: emilien at redhat.com (Emilien Macchi) Date: Thu, 26 Jul 2018 14:28:39 -0400 Subject: [openstack-dev] [tripleo] Rocky milestone 3 was released! Message-ID: Kudos to the team, we just release our third Rocky milestone! As usual, I prepared some numbers so you can see our project health: https://docs.google.com/presentation/d/1RV30OVxmXv1y_z33LuXMVB56TA54Urp7oHIoTNwrtzA/edit#slide=id.p Some comments: 1) More bugs were fixed in rocky milestone 3 than before. 2) Milestone 2 and Milestone 2 delivered the same amount of blueprints. 3) Our list of core reviewers keep growing! 4) Commits and LOC are much higher than Queens. Now the focus should be on stabilization and bug fixing, we are in release candidate mode which means no more features unless you have FFE granted. Thanks everyone for this hard work! 
-- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Thu Jul 26 18:58:52 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Thu, 26 Jul 2018 13:58:52 -0500 Subject: [openstack-dev] [keystone] PTL Candidacy for the Stein cycle Message-ID: <58fc6c1a-975a-da17-0600-3698adce867d@gmail.com> Hey everyone, I'm writing to submit my self-nomination as keystone's PTL for the Stein release. We've made significant progress tackling some of the major goals we set for keystone in Pike. Now that we're getting close to wrapping up some of those initiatives, I'd like to continue advocating for enhanced RBAC and unified limits. I think we can do this specifically by using them in keystone, where applicable, and finalize them in Stein. While a lot of the work we tackled in Rocky was transparent to users, it paved the way for us to make strides in other areas. We focused on refactoring large chunks of code in order to reduce technical debt and traded some hand-built solutions in favor of well-known frameworks. In my opinion, these are major accomplishments that drastically simplified keystone. Because of this, it'll be easier to implement new features we originally slated for this release. We also took time to smooth out usability issues with unified limits and implemented support across clients and libraries. This is going to help services consume keystone's unified limits implementation early next release. Additionally, I'd like to take some time in Stein to focus on the next set of challenges and where we'd like to take keystone in the future. One area that we haven't really had the bandwidth to focus on is federation. From Juno to Ocata there was a consistent development focus on supporting federated deployments, resulting in a steady stream of features or improvements. Conversely, I think having a break from constant development will help us approach it with a fresh perspective. In my opinion, federation improvements are a timely thing to work on given the use-cases that have been cropping up in recent summits and PTGs. Ideally, I think it would great to come up with an actionable plan for making federation easier to use and a first-class tested citizen of keystone. Finally, I'll continue to place utmost importance on assisting other services in how they consume and leverage the work we do. Thanks for taking a moment to read what I have to say and I look forward to catching up in Denver. Lance -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From mriedemos at gmail.com Thu Jul 26 19:06:01 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 26 Jul 2018 14:06:01 -0500 Subject: [openstack-dev] [tc] Fast-track: Remove Stable branch maintenance as a project team In-Reply-To: <20180726170052.GA15608@sm-workstation> References: <20180726170052.GA15608@sm-workstation> Message-ID: On 7/26/2018 12:00 PM, Sean McGinnis wrote: > I wouldn't think so. Nothing is changing with the policy, so it is still of > interest to see which projects are following that. I don't believe the policy > was tied in any way with stable being an actual project team vs a SIG. OK, then maybe as a separate issue, I would argue the tag is not maintained and therefore useless at best, or misleading at worst (for those projects that don't have it) and therefore should be removed. Who doesn't not agree with me?! 
-- Thanks, Matt From skaplons at redhat.com Thu Jul 26 19:25:50 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Thu, 26 Jul 2018 21:25:50 +0200 Subject: [openstack-dev] [Nova][Cinder][Tempest] Help with tempest.api.compute.servers.test_device_tagging.TaggedAttachmentsTest.test_tagged_attachment needed In-Reply-To: References: <6AEB5700-BBCA-46C3-9A48-83EC7CC92475@redhat.com> Message-ID: <45AE5455-0DFC-4509-B095-861069870E6C@redhat.com> Thx :) > Wiadomość napisana przez Matt Riedemann w dniu 26.07.2018, o godz. 18:32: > > On 7/23/2018 4:20 AM, Slawomir Kaplonski wrote: >> Thx Artom for taking care of it. Did You made any progress? >> I think that it might be quite important to fix as it failed around 50 times during last 7 days: >> http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22line%20386%2C%20in%20test_tagged_attachment%5C%22 > > I've proposed a Tempest change to skip that part of the test for now: > > https://review.openstack.org/#/c/586292/ > > We could revert that and link it to artom's debug patch to see if we can recreate with proper debug. > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev — Slawek Kaplonski Senior software engineer Red Hat From melwittt at gmail.com Thu Jul 26 20:07:47 2018 From: melwittt at gmail.com (melanie witt) Date: Thu, 26 Jul 2018 13:07:47 -0700 Subject: [openstack-dev] [requirements][release] FFE for os-vif 1.11.1 In-Reply-To: <20180726180118.f72dopljnzj67vsb@gentoo.org> References: <06944768-740f-12ed-db71-4d5687200b72@gmail.com> <20180726180118.f72dopljnzj67vsb@gentoo.org> Message-ID: On Thu, 26 Jul 2018 13:01:18 -0500, Matthew Thode wrote: > On 18-07-26 10:43:05, melanie witt wrote: >> Hello, >> >> I'd like to ask for an exception to add os-vif 1.11.1 to stable/rocky. The >> current release for rocky, 1.11.0, added a new feature: the NoOp Plugin, but >> it's not actually usable (it's not being loaded) because we missed adding a >> file to the setup.cfg. >> >> We have fixed the problem in a one liner add to setup.cfg [1] and we would >> like to be able to do another release 1.11.1 for rocky to include this fix. >> That way, the NoOp Plugin feature advertised in the release notes [2] for >> rocky would be usable for consumers. >> >> [1] https://review.openstack.org/585530 >> [2] https://docs.openstack.org/releasenotes/os-vif/unreleased.html#relnotes-1-11-0 >> > > Yep, we talked about it in the release channel. 
> > +----------------------------------------+--------------------------------------------------------------------+------+------------------------------------+ > | Repository | Filename | Line | Text | > +----------------------------------------+--------------------------------------------------------------------+------+------------------------------------+ > | kuryr-kubernetes | requirements.txt | 18 | os-vif!=1.8.0,>=1.7.0 # Apache-2.0 | > | nova | requirements.txt | 59 | os-vif!=1.8.0,>=1.7.0 # Apache-2.0 | > | nova-lxd | requirements.txt | 7 | os-vif!=1.8.0,>=1.9.0 # Apache-2.0 | > | networking-bigswitch | requirements.txt | 6 | os-vif>=1.1.0 # Apache-2.0 | > | networking-bigswitch | test-requirements.txt | 25 | os-vif>=1.1.0 # Apache-2.0 | > | networking-midonet | test-requirements.txt | 40 | os-vif!=1.8.0,>=1.7.0 # Apache-2.0 | > +----------------------------------------+--------------------------------------------------------------------+------+------------------------------------+ > > All these projects would need re-releases if you plan on raising the > minimum. They would also need reviews submitted individually for that. > A upper-constraint only fix would not need that, but would also still > allow consumers to encounter the bug, up to you to decide. > LGTM otherwise. We don't need to raise the minimum -- this will just be a small update to fix the existing 1.11.0 release. Thanks! -melanie From ashlee at openstack.org Thu Jul 26 21:12:19 2018 From: ashlee at openstack.org (Ashlee Ferguson) Date: Thu, 26 Jul 2018 16:12:19 -0500 Subject: [openstack-dev] OpenStack Summit Berlin - Community Voting Closing Soon Message-ID: Hi everyone, Session voting for the Berlin Summit closes in less than 8 hours! Submit your votes by July 26 at 11:59pm Pacific Time (Friday, July 27 at 6:59 UTC). VOTE HERE The Programming Committees will ultimately determine the final schedule. Community votes are meant to help inform the decision, but are not considered to be the deciding factor. The Programming Committee members exercise judgment in their area of expertise and help ensure diversity. View full details of the session selection process here. Continue to visit https://www.openstack.org/summit/berlin-2018 for all Summit-related information. REGISTER Register for the Summit for $699 before prices increase after August 21 at 11:59pm Pacific Time (August 22 at 6:59am UTC). VISA APPLICATION PROCESS Make sure to secure your Visa soon. More information about the Visa application process. TRAVEL SUPPORT PROGRAM August 30 is the last day to submit applications. Please submit your applications by 11:59pm Pacific Time (August 31 at 6:59am UTC). If you have any questions, please email summit at openstack.org . Cheers, Ashlee Ashlee Ferguson OpenStack Foundation ashlee at openstack.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Thu Jul 26 21:37:21 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 26 Jul 2018 16:37:21 -0500 Subject: [openstack-dev] [tc] Fast-track: Remove Stable branch maintenance as a project team In-Reply-To: References: <20180726170052.GA15608@sm-workstation> Message-ID: <20180726213720.GA3106@sm-workstation> On Thu, Jul 26, 2018 at 02:06:01PM -0500, Matt Riedemann wrote: > On 7/26/2018 12:00 PM, Sean McGinnis wrote: > > I wouldn't think so. Nothing is changing with the policy, so it is still of > > interest to see which projects are following that. 
I don't believe the policy > > was tied in any way with stable being an actual project team vs a SIG. > > OK, then maybe as a separate issue, I would argue the tag is not maintained > and therefore useless at best, or misleading at worst (for those projects > that don't have it) and therefore should be removed. > I'd be curious to hear more about why you don't think that tag is maintained. For projects that assert they follow stable policy, in the relase process we have extra scrutiny that nothing is being released on stable branches that would appear to violate the stable policy. Granted, we need to base most of that evaluation on the commit messages, so it's certainly possible to phrase something in a misleading way that would not raise any red flags for stable compliance, but if that happens, I would think it would be unintentional and rare. From sean.mcginnis at gmx.com Fri Jul 27 00:13:44 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 26 Jul 2018 19:13:44 -0500 Subject: [openstack-dev] [release] Release countdown for week R-4, July 30 - August 3 In-Reply-To: <20180726122200.GA28007@sm-workstation> References: <20180726122200.GA28007@sm-workstation> Message-ID: <20180727001344.GA12052@sm-workstation> As the client deadline and milestone 3 day winds down here, I wanted to do a quick check on where things stand before calling it a day. This is according to script output, so I haven't actually looked into any details so far. But according to the script, the follow cycle-with-intermediary deliverables have not had a release done for rocky yet: aodh bifrost ceilometer cloudkitty-dashboard cloudkitty ec2-api ironic-python-agent karbor-dashboard karbor kuryr-kubernetes kuryr-libnetwork magnum-ui magnum masakari-dashboard monasca-kibana-plugin monasca-log-api monasca-notification networking-hyperv panko python-cloudkittyclient python-designateclient python-karborclient python-magnumclient python-pankoclient python-searchlightclient python-senlinclient python-tricircleclient sahara-tests senlin-dashboard tacker-horizon tacker zun-ui zun Just a reminder that we will need to force a release on these in order to get a final point to branch stable/rocky. Taking a look at ones that have done a release but have had more unreleased commits since then, I'm also seeing several python-*client deliverables that may be missing final releases. Thanks, Sean On Thu, Jul 26, 2018 at 07:22:01AM -0500, Sean McGinnis wrote: > > General Information > ------------------- > > For deliverables following the cycle-with-milestones model, we are now (after > the day I send this) past Feature Freeze. The focus should be on determining > and fixing release-critical bugs. At this stage only bugfixes should be > approved for merging in the master branches: feature work should only be > considered if explicitly granted a Feature Freeze exception by the team PTL > (after a public discussion on the mailing-list). > > StringFreeze is now in effect, in order to let the I18N team do the translation > work in good conditions. The StringFreeze is currently soft (allowing > exceptions as long as they are discussed on the mailing-list and deemed worth > the effort). It will become a hard StringFreeze on 9th of August along with the > RC. > > The requirements repository is also frozen, until all cycle-with-milestones > deliverables have produced a RC1 and have their stable/rocky branches. 
If > release critical library or client library releases are needed for Rocky past > the freeze dates, you must request a Feature Freeze Exception (FFE) from the > requirements team before we can do a new release to avoid having something > released in Rocky that is not actually usable. This is done by posting to the > openstack-dev mailing list with a subject line similar to: > > [$PROJECT][requirements] FFE requested for $PROJECT_LIB > > Include justification/reasoning for why a FFE is needed for this lib. If/when > the requirements team OKs the post-freeze update, we can then process a new > release. Including a link to the FFE in the release request is not required, > but would be helpful in making sure we are clear to do a new release. > > Note that deliverables that are not tagged for release by the appropriate > deadline will be reviewed to see if they are still active enough to stay on the > official project list. > From liliueecg at gmail.com Fri Jul 27 01:43:17 2018 From: liliueecg at gmail.com (Li Liu) Date: Thu, 26 Jul 2018 21:43:17 -0400 Subject: [openstack-dev] [cyborg] PTL Candidacy for Stein cycle Message-ID: I'd like to nominate myself for the Cyborg PTL role for the Stein cycle. Thank you Howard for starting this new project in the community couple year ago. He led the project the beginning and helped the project ramping up on the right track. Now the project is in a fanatic state after a couple releases preparation. We had our first official release from Q and continues to deliver great features in R and S releases. Our team is growing fast, people are showing interests in the project across different domains from the industry. We took it in our pride that cyborg is one of the few projects that is grown entirely in the OpenStack community from the very beginning: no vendor code dump, design discussion from scratch, write every bit of code from zero. I joined the project not too long ago, but I am already so fascinated by being in such a great team and knowing the code we write can help others around the world. In Rocky, we added further support for FPGAs, e.g. bitstream programming APIs, metadata bitstream standardization. We also finalized Nova-Cyborg interaction spec and start working with Placement folks to make things happen. In addition, we have more device drivers supports (GPUs, Intel/Xilinx FPGAs, etc.) Looking forward in Stein Cycle, here is a list of things we will try to accomplish: 1. Finish and polish up the interaction with Nova through placement API 2. FInish Implementing os-acc library 3. Complete the E2E flow of doing acc scheduling, initializing, as well as FPGA programming 4. Work with the k8s community to provide containerization support for Kubernetes DPI plugin. 5. Work with Berkely RISC-V team to port their projects over to the OpenStack ecosystem(e.g. FireSim) -- Thank you Regards Li Liu -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Fri Jul 27 03:54:52 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 26 Jul 2018 22:54:52 -0500 Subject: [openstack-dev] [tc] Fast-track: Remove Stable branch maintenance as a project team In-Reply-To: <20180726213720.GA3106@sm-workstation> References: <20180726170052.GA15608@sm-workstation> <20180726213720.GA3106@sm-workstation> Message-ID: <97c24bd9-1d3b-927e-f89c-45e12a4d7164@gmail.com> On 7/26/2018 4:37 PM, Sean McGinnis wrote: > I'd be curious to hear more about why you don't think that tag is maintained. 
Are projects actively applying for the tag? > > For projects that assert they follow stable policy, in the relase process we > have extra scrutiny that nothing is being released on stable branches that > would appear to violate the stable policy. Is this automated somehow and takes the tag specifically into account, e.g. some kind of validation that for projects with the tag, a release on a stable branch doesn't have something like "blueprint" in the commit message? Or is that just manual code review of the change log? -- Thanks, Matt From tony at bakeyournoodle.com Fri Jul 27 05:42:37 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Fri, 27 Jul 2018 15:42:37 +1000 Subject: [openstack-dev] [tc] Fast-track: Remove Stable branch maintenance as a project team In-Reply-To: References: Message-ID: <20180727054236.GK30070@thor.bakeyournoodle.com> On Thu, Jul 26, 2018 at 11:42:01AM -0500, Matt Riedemann wrote: > On 7/25/2018 3:07 PM, Mohammed Naser wrote: > > Hi everyone: > > > > This email is just to notify everyone on the TC and the community that > > the change to remove the stable branch maintenance as a project > > team[1] has been fast-tracked[2]. > > > > The change should be approved on 2018-07-28 however it is beneficial > > to remove the stable branch team (which has been moved into a SIG) in > > order for `tonyb` to be able to act as an election official. > > > > There seems to be no opposing votes however a revert is always > > available if any members of the TC are opposed to the change[3]. > > > > Thanks to Tony for all of his help in the elections. > > > > Regards, > > Mohammed > > > > [1]:https://review.openstack.org/#/c/584206/ > > [2]:https://governance.openstack.org/tc/reference/house-rules.html#other-project-team-updates > > [3]:https://governance.openstack.org/tc/reference/house-rules.html#rolling-back-fast-tracked-changes > > First time I've heard of it... http://lists.openstack.org/pipermail/openstack-dev/2018-July/132369.html > but thanks. I personally don't think calling > something a SIG magically makes people appear to help out, like creating a > stable maintenance official project team and PTL didn't really grow a > contributor base either, but so it goes. I'm not expecting magic to happen but, I think a SIG is a better fit. Since Dublin we've had Elod Illes appear and do good things so perhaps there is hope[1]! > Only question I have is will the stable:follows-policy governance tag [1] > also be removed? That wasn't on the cards, it's still the same gerrit group that is expected to approve (or not) new applications. Yours Tony. [1] Hope is not a strategy -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From tony at bakeyournoodle.com Fri Jul 27 05:49:08 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Fri, 27 Jul 2018 15:49:08 +1000 Subject: [openstack-dev] [tc] Fast-track: Remove Stable branch maintenance as a project team In-Reply-To: <97c24bd9-1d3b-927e-f89c-45e12a4d7164@gmail.com> References: <20180726170052.GA15608@sm-workstation> <20180726213720.GA3106@sm-workstation> <97c24bd9-1d3b-927e-f89c-45e12a4d7164@gmail.com> Message-ID: <20180727054908.GL30070@thor.bakeyournoodle.com> On Thu, Jul 26, 2018 at 10:54:52PM -0500, Matt Riedemann wrote: > On 7/26/2018 4:37 PM, Sean McGinnis wrote: > > I'd be curious to hear more about why you don't think that tag is maintained. > > Are projects actively applying for the tag? 
> > > > > For projects that assert they follow stable policy, in the relase process we > > have extra scrutiny that nothing is being released on stable branches that > > would appear to violate the stable policy. > > Is this automated somehow and takes the tag specifically into account, e.g. > some kind of validation that for projects with the tag, a release on a > stable branch doesn't have something like "blueprint" in the commit message? > Or is that just manual code review of the change log? Manual review of the changelog. For project that assert the tag the list-changes job prints a big banner to get the attention of the release managers[1]. Those reviews need a +2 from me (or Alan) *and* a +2 from a release manager. I look at the commit messages and where thing look 'interesting' I go do code reviews on the backport changes. It isn't ideal but IMO it's far from unmaintained. If you had ideas on automation we could put in place to make this more robust, without getting in the way I'm all ears[2] Yours Tony. [1] http://logs.openstack.org/42/586242/1/check/releases-tox-list-changes/af61e24/job-output.txt.gz#_2018-07-26_15_30_07_144206 [2] Well not literally but I am listening ;P -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From gergely.csatari at nokia.com Fri Jul 27 07:06:45 2018 From: gergely.csatari at nokia.com (Csatari, Gergely (Nokia - HU/Budapest)) Date: Fri, 27 Jul 2018 07:06:45 +0000 Subject: [openstack-dev] [edge][glance]: Image handling in edge environment In-Reply-To: References: Message-ID: Hi, The meeting will take place on 2018.08.01 18-19h CET. Here I attach the invitation. Br, Gerg0 From: Csatari, Gergely (Nokia - HU/Budapest) Sent: Friday, July 20, 2018 1:32 PM To: 'edge-computing' ; 'OpenStack Development Mailing List (not for usage questions)' Cc: 'jokke' Subject: RE: [edge][glance]: Image handling in edge environment Hi, We figured out with Jokke two timeslots what would be okay for both of us for this common meeting. Please, other interested parties give your votes to here: https://doodle.com/poll/9rfcb8aavsmybzfu I will evaluate the results and fix the time on 25.07.2018 12h CET. Br, Gerg0 From: Csatari, Gergely (Nokia - HU/Budapest) Sent: Wednesday, July 18, 2018 10:02 AM To: 'edge-computing' >; OpenStack Development Mailing List (not for usage questions) > Subject: [edge][glance]: Image handling in edge environment Hi, We had a great Forum session about image handling in edge environment in Vancouver [1]. As one outcome of the session I've created a wiki with the mentioned architecture options [1]. During the Edge Working Group [3] discussions we identified some questions (some of them are in the wiki, some of them are in mails [4]) and also I would like to get some feedback on the analyzis in the wiki from people who know Glance. I think the best would be to have some kind of meeting and I see two options to organize this: * Organize a dedicated meeting for this * Add this topic as an agenda point to the Glance weekly meeting Please share your preference and/or opinion. Thanks, Gerg0 [1]: https://etherpad.openstack.org/p/yvr-edge-cloud-images [2]: https://wiki.openstack.org/wiki/Image_handling_in_edge_environment [3]: https://wiki.openstack.org/wiki/Edge_Computing_Group [4]: http://lists.openstack.org/pipermail/edge-computing/2018-June/000239.html -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- An embedded message was scrubbed... From: "Csatari, Gergely (Nokia - HU/Budapest)" Subject: Image handling in edge environment Date: Fri, 27 Jul 2018 06:58:32 +0000 Size: 15935 URL: From d.krol at samsung.com Fri Jul 27 07:47:31 2018 From: d.krol at samsung.com (Dariusz Krol) Date: Fri, 27 Jul 2018 09:47:31 +0200 Subject: [openstack-dev] openstack-dev] [trove] Considering the transfter of the project leadership In-Reply-To: <20180726144850.GA4574@sm-workstation> References: <20180726144850.GA4574@sm-workstation> Message-ID: <20180727074731eucms1p384f2e07d3a9d6745f28554173e66a3c1@eucms1p3> An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 13168 bytes Desc: not available URL: From jistr at redhat.com Fri Jul 27 07:57:31 2018 From: jistr at redhat.com (=?UTF-8?B?SmnFmcOtIFN0csOhbnNrw70=?=) Date: Fri, 27 Jul 2018 09:57:31 +0200 Subject: [openstack-dev] [tripleo] Rocky Ceph update/upgrade regression risk (semi-FFE) Message-ID: <8aa14f14-20f9-93cc-c5cc-8a45c3b4846a@redhat.com> Hi folks, i want to raise attention on remaining patches that are needed to prevent losing Ceph updates/upgrades in Rocky [1], basically making the Ceph upgrade mechanism compatible with config-download. I'd call this a semi-FFE, as a few of the patches have characteristics of feature work, but at the same time i don't believe we can afford having Ceph unupgradable in Rocky, so it has characteristics of a regression bug too. I reported a bug [2] and tagged the patches in case we end up having to do backports. Please help with reviews and landing the patches if possible. It would have been better to focus on this earlier in the cycle, but majority of Upgrades squad work is exactly this kind of semi-FFE -- nontrivial in terms of effort required, but at the same time it's not something we can realistically slip into the next release, because it would be a regression. This sort of work tends to steal some of our focus in N cycle and direct it towards N-1 release (or even older). However, i think we've been gradually catching up with the release cycle lately, and increased focus on keeping update/upgrade CI green helps us catch breakages before they land and saves some person-hours, so i'm hoping the future is bright(er) on this. Thanks and have a good day, Jirka [1] https://review.openstack.org/#/q/topic:external-update-upgrade [2] https://bugs.launchpad.net/tripleo/+bug/1783949 From ekuvaja at redhat.com Fri Jul 27 10:43:14 2018 From: ekuvaja at redhat.com (Erno Kuvaja) Date: Fri, 27 Jul 2018 11:43:14 +0100 Subject: [openstack-dev] [glance] FFE for multihash In-Reply-To: References: Message-ID: On Thu, Jul 26, 2018 at 3:28 PM, Brian Rosmaita wrote: > I'm asking for a Feature Freeze Exception for the glance-side work for > the Secure Hash Algorithm Support (multihash) feature [0]. The work > is underway and should be completed early next week. 
> > cheers, > brian > > [0] https://specs.openstack.org/openstack/glance-specs/specs/rocky/approved/glance/multihash.html > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev As agreed on the Weekly meeting yesterday; this work is well on it's way, the glance_store and python-glanceclient bits have been merged and released; this change was agreed for FFE. Thanks, Erno jokke Kuvaja From ekuvaja at redhat.com Fri Jul 27 10:44:51 2018 From: ekuvaja at redhat.com (Erno Kuvaja) Date: Fri, 27 Jul 2018 11:44:51 +0100 Subject: [openstack-dev] [glance] FFE for multi-backend In-Reply-To: References: Message-ID: On Thu, Jul 26, 2018 at 3:35 PM, Abhishek Kekane wrote: > I'm asking for a Feature Freeze Exception for Multiple backend support > (multi-store) > feature [0]. The only remaining work is a versioning patch to flag this > feature as > experimental and should be completed early next week. > > [0] > https://specs.openstack.org/openstack/glance-specs/specs/rocky/approved/glance/multi-store.html > > Patches open for review: > > https://review.openstack.org/#/q/status:open+project:openstack/glance+branch:master+topic:bp/multi-store > > > > Th > anks & Best Regards, > > Abhishek Kekane > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > As agreed on weekly meeting yesterday, this change is just pending prerequisites to merge so it can be released as EXPERIMENTAL API, approved for FFE. Thanks, Erno jokke Kuvaja From strigazi at gmail.com Fri Jul 27 11:35:52 2018 From: strigazi at gmail.com (Spyros Trigazis) Date: Fri, 27 Jul 2018 13:35:52 +0200 Subject: [openstack-dev] [magnum] PTL Candidacy for Stein Message-ID: Hello OpenStack community! I would like to nominate myself as PTL for the Magnum project for the Stein cycle. In the last cycle magnum became more stable and is reaching the point of becoming a feature complete solution for providing managed container clusters for private or public OpenStack clouds. Also during this cycle the community around the project became healthy and more sustainable. My goals for Stein are to: - complete the work in cluster upgrades and cluster healing - keep up with the latest release of Kubernetes and Docker in stable branches and improve their release process - documenation for cloud operators improvements - continue on building the community which supports the project Thanks for your time, Spyros strigazi on Freenode [0] https://review.openstack.org/#/c/586516/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From emilien at redhat.com Fri Jul 27 11:48:44 2018 From: emilien at redhat.com (Emilien Macchi) Date: Fri, 27 Jul 2018 07:48:44 -0400 Subject: [openstack-dev] [tripleo] Rocky Ceph update/upgrade regression risk (semi-FFE) In-Reply-To: <8aa14f14-20f9-93cc-c5cc-8a45c3b4846a@redhat.com> References: <8aa14f14-20f9-93cc-c5cc-8a45c3b4846a@redhat.com> Message-ID: On Fri, Jul 27, 2018 at 3:58 AM Jiří Stránský wrote: > I'd call this a semi-FFE, as a few of the patches have characteristics of > feature work, > but at the same time i don't believe we can afford having Ceph > unupgradable in Rocky, so it has characteristics of a regression bug > too. I reported a bug [2] and tagged the patches in case we end up > having to do backports. > Right, let's consider it as a bug and not a feature. Also, it's upgrade related so it's top-priority as we did in prior cycles. Therefore I think it's fine. -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From aschultz at redhat.com Fri Jul 27 12:37:43 2018 From: aschultz at redhat.com (Alex Schultz) Date: Fri, 27 Jul 2018 06:37:43 -0600 Subject: [openstack-dev] [tripleo] Rocky Ceph update/upgrade regression risk (semi-FFE) In-Reply-To: References: <8aa14f14-20f9-93cc-c5cc-8a45c3b4846a@redhat.com> Message-ID: On Fri, Jul 27, 2018 at 5:48 AM, Emilien Macchi wrote: > > > On Fri, Jul 27, 2018 at 3:58 AM Jiří Stránský wrote: >> >> I'd call this a semi-FFE, as a few of the patches have characteristics of >> feature work, >> but at the same time i don't believe we can afford having Ceph >> unupgradable in Rocky, so it has characteristics of a regression bug >> too. I reported a bug [2] and tagged the patches in case we end up >> having to do backports. > > > Right, let's consider it as a bug and not a feature. Also, it's upgrade > related so it's top-priority as we did in prior cycles. Therefore I think > it's fine. I second this. We must be able to upgrade so this needs to be addressed. > -- > Emilien Macchi > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From cdent+os at anticdent.org Fri Jul 27 13:07:19 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 27 Jul 2018 14:07:19 +0100 (BST) Subject: [openstack-dev] [nova] [placement] placement update 18-30 Message-ID: HTML: https://anticdent.org/placement-update-18-30.html This is placement update 18-30, a weekly update of ongoing development related to the [OpenStack](https://www.openstack.org/) [placement service](https://developer.openstack.org/api-ref/placement/). # Most Important This week is feature freeze for the Rocky cycle, so the important stuff is watching already approved code to make sure it actually merges, bug fixes and testing. # What's Changed At yesterday's meeting it was decided the pending work on the /reshaper will be punted to early Stein. Though the API level is nearly ready, the code that exercises it from the nova side is very new and the calculus of confidence, review bandwidth and gate slowness works against doing an FFE. 
Some references: * * Meanwhile, pending work to get the report client using consumer generations is also on hold: * As far as I understand it no progress has been made on "Effectively managing nested and shared resource providers when managing allocations (such as in migrations)." Some functionality has merged recently: * Several changes to make the placement functional tests more placement oriented (use placement context, not be based on nova.test.TestCase). * Add 'nova-manage placement sync_aggregates' * Consumer generation is being used in heal allocations CLI * Allocations schema no longer allows extra fields * The report client is more robust about checking and retrying provider generations. * If force_hosts or force_nodes is being used, don't set a limit when requesting allocation candidates. # Questions I wrote up some analysis of the way the [resource tracker talks to placement](https://anticdent.org/novas-use-of-placement.html). It identifies some redundancies. Actually it reinforces that some redundancies we've known about are still there. Fixing some of these things might count as bug fixes. What do you think? # Bugs * Placement related [bugs not yet in progress](https://goo.gl/TgiPXb): 14, -1 from last week. * [In progress placement bugs](https://goo.gl/vzGGDQ) 13, -2 on last week. # Main Themes ## Documentation Now that we are feature frozen we better document all the stuff. And more than likely we'll find some bugs while doing that documenting. This is a section for reminding us to document all the fun stuff we are enabling. Open areas include: * "How to deploy / model shared disk. Seems fairly straight-forward, and we could even maybe create a multi-node ceph job that does this - wouldn't that be awesome?!?!", says an enthusiastic Matt Riedemann. * The whens and wheres of re-shaping and VGPUs. * Please add more here by responding to this email. ## Consumer Generations These are in place on the placement side. There's pending work on the client side, and a semantic fix on the server side, but neither are going to merge this cycle. * return 404 when no consumer found in allocs * Use placement 1.28 in scheduler report client (1.28 is consumer gens) ## Reshape Provider Trees On hold, but still in progress as we hope to get it merged as soon as there is an opportunity to do so: It's all at: ## Mirror Host Aggregates The command line tool merged, so this is done. It allows aggregate-based limitation of allocation candidates, a nice little feature that will speed things up for people. ## Extraction I wrote up a second [blog post](https://anticdent.org/placement-extraction-2.html) on some of the issues associated with placement extraction. There are several topics on the [PTG etherpad](https://etherpad.openstack.org/p/nova-ptg-stein) related to extraction. # Other Since we're at feature freeze I'm going to only include things in the list that were already there and that might count as bug fixes or potentially relevant for near term review. So: 11, down from 29. 
* Add unit test for non-placement resize * Use placement.inventory.inuse in report client * [placement] api-ref: add traits parameter * Convert 'placement_api_docs' into a Sphinx extension * Add placement.concurrent_udpate to generation pre-checks * Delete allocations when it is re-allocated (This is addressing a TODO in the report client) * local disk inventory reporting related * Delete orphan compute nodes before updating resources * Remove Ocata comments which expires now * Ignore some updates from virt driver * Docs: Add Placement to Nova system architecture # End Lots to review, test, and document. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From tribecca at tribecc.us Fri Jul 27 14:28:18 2018 From: tribecca at tribecc.us (T. Nichole Williams) Date: Fri, 27 Jul 2018 09:28:18 -0500 Subject: [openstack-dev] [magnum] PTL Candidacy for Stein In-Reply-To: References: Message-ID: <4F23D0B6-127F-443A-A64C-2744F16248CF@tribecc.us> +1, you’ve got my vote :D T. Nichole Williams tribecca at tribecc.us > On Jul 27, 2018, at 6:35 AM, Spyros Trigazis wrote: > > Hello OpenStack community! > > I would like to nominate myself as PTL for the Magnum project for the > Stein cycle. > > In the last cycle magnum became more stable and is reaching the point > of becoming a feature complete solution for providing managed container > clusters for private or public OpenStack clouds. Also during this cycle > the community around the project became healthy and more sustainable. > > My goals for Stein are to: > - complete the work in cluster upgrades and cluster healing > - keep up with the latest release of Kubernetes and Docker in stable > branches and improve their release process > - documenation for cloud operators improvements > - continue on building the community which supports the project > > Thanks for your time, > Spyros > > strigazi on Freenode > > [0] https://review.openstack.org/#/c/586516/ __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Fri Jul 27 14:43:59 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 27 Jul 2018 09:43:59 -0500 Subject: [openstack-dev] [cinder] about block device driver In-Reply-To: <20180716092027.pc43radmozdgndd5@localhost> References: <20180716092027.pc43radmozdgndd5@localhost> Message-ID: On 7/16/2018 4:20 AM, Gorka Eguileor wrote: > If I remember correctly the driver was deprecated because it had no > maintainer or CI. In Cinder we require our drivers to have both, > otherwise we can't guarantee that they actually work or that anyone will > fix it if it gets broken. Would this really require 3rd party CI if it's just local block storage on the compute node (in devstack)? We could do that with an upstream CI job right? We already have upstream CI jobs for things like rbd and nfs. The 3rd party CI requirements generally are for proprietary storage backends. I'm only asking about the CI side of this, the other notes from Sean about tweaking the LVM volume backend and feature parity are good reasons for removal of the unmaintained driver. 
Another option is using the nova + libvirt + lvm image backend for local (to the VM) ephemeral disk: https://github.com/openstack/nova/blob/6be7f7248fb1c2bbb890a0a48a424e205e173c9c/nova/virt/libvirt/imagebackend.py#L653 -- Thanks, Matt From sdoran at redhat.com Fri Jul 27 14:52:38 2018 From: sdoran at redhat.com (Sam Doran) Date: Fri, 27 Jul 2018 10:52:38 -0400 Subject: [openstack-dev] [tripleo] [tripleo-validations] using using top-level fact vars will deprecated in future Ansible versions In-Reply-To: <159c9b6c-077a-6328-d4f7-fde9664a3571@redhat.com> References: <59716157-D28C-4DA8-89EC-0E98E8072153@redhat.com> <159c9b6c-077a-6328-d4f7-fde9664a3571@redhat.com> Message-ID: <5B28CDBA-B12B-4FB1-BDBD-F34BA9D1D54E@redhat.com> > so if, for convenience, we do this: > vars: > a_mounts: "{{ hostvars[inventory_hostname].ansible_facts.mounts }}" > > That's completely acceptable and correct, and won't create any security > issue, right? Yes, that will work, but you don't need to use the hostvars dict. You can simply use ansible_facts.mounts. Using facts in no way creates security issues. The attack vector is a managed node setting local facts, or a malicious playbook author setting a fact that contains executable and malicious code. Ansible uses an UnsafeProxy class to ensure text from untrusted sources is properly handled to defend against this. > I think the last thing we want is to break TripleO + Ceph integration so we will maintain Ansible 2.5.x in TripleO Rocky and upgrade to 2.6.x in Stein when ceph-ansible 3.2 is used and working well. This sounds like a good plan. --- Respectfully, Sam Doran Senior Software Engineer Ansible by Red Hat sdoran at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Fri Jul 27 15:27:14 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Fri, 27 Jul 2018 10:27:14 -0500 Subject: [openstack-dev] [keystone] Keystone Team Update - Week of 23 July 2018 Message-ID: <784e10af-c1b4-dfdb-ef28-e21f6b9a60de@gmail.com> # Keystone Team Update - Week of 23 July 2018 ## News This week wrapped up rocky-3, but the majority of the things working through review are refactors that aren't necessarily susceptible to the deadline. ## Recently Merged Changes Search query: https://bit.ly/2IACk3F We merged 32 changes this week, including the remaining patches for implementing strict two-level hierarchical limits (server and client support), Flask work, and a security fix. ## Changes that need Attention Search query: https://bit.ly/2wv7QLK There are 47 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots. There are still a lot of patches that need attention, specifically the work to start converting keystone APIs to consume Flask. These changes should be transparent to end users, but if you have questions about the approach or specific reviews, please come ask in #openstack-keystone. Kristi also has a patch up to implement the mutable config goal for keystone [0]. This work was dependent on Flask bits that merged earlier this week, but based on a discussion with the TC we've already missed the deadline [1]. Reviews here would still be appreciated because it should help us merge the implementation early in Stein. [0] https://review.openstack.org/#/c/585417/ [1] http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-27.log.html#t2018-07-27T15:03:49 ## Bugs This week we opened 6 new bugs and fixed 2. 
The highlight here is a security bug that was fixed and backported to all supported releases [0]. [0] https://bugs.launchpad.net/keystone/+bug/1779205 ## Milestone Outlook https://releases.openstack.org/rocky/schedule.html At this point we're past the third milestone, meaning requirements are frozen and we're in a soft string freeze. Please be aware of those things when reviewing patch sets. The next deadline for us is RC target on August 10th. ## Help with this newsletter Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter Dashboard generated using gerrit-dash-creator and https://gist.github.com/lbragstad/9b0477289177743d1ebfc276d1697b67 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From james.page at canonical.com Fri Jul 27 16:09:04 2018 From: james.page at canonical.com (James Page) Date: Fri, 27 Jul 2018 17:09:04 +0100 Subject: [openstack-dev] [charms] PTL non-candidacy for Stein cycle Message-ID: Hi All I won't be standing for PTL of OpenStack Charms for this upcoming cycle. Its been my pleasure to have been PTL since the project was accepted into OpenStack, but its time to let someone else take the helm. I'm not going anywhere but expect to have a bit of a different focus for this cycle (at least). Cheers James -------------- next part -------------- An HTML attachment was scrubbed... URL: From james.slagle at gmail.com Fri Jul 27 16:41:22 2018 From: james.slagle at gmail.com (James Slagle) Date: Fri, 27 Jul 2018 12:41:22 -0400 Subject: [openstack-dev] [tripleo] network isolation can't find files referred to on director In-Reply-To: References: Message-ID: On Thu, Jul 26, 2018 at 4:58 AM, Samuel Monderer wrote: > Hi James, > > I understand the network-environment.yaml will also be generated. > What do you mean by rendered path? Will it be > "usr/share/openstack-tripleo-heat-templates/network/ports/"? Yes, the rendered path is the path that the jinja2 templating process creates. > By the way I didn't find any other place in my templates where I refer to > these files? > What about custom nic configs is there also a jinja2 process to create them? No. custom nic configs are by definition, custom to the environment you are deploying. Only you know how to properly define what newtork configurations needs applying. Our sample nic configs are generated from jinja2 now. For example: tripleo-heat-templates/network/config/single-nic-vlans/role.role.j2.yaml If you wanted to follow that pattern such that your custom nic config templates were generated, you could do that From s at cassiba.com Fri Jul 27 17:39:36 2018 From: s at cassiba.com (Samuel Cassiba) Date: Fri, 27 Jul 2018 10:39:36 -0700 Subject: [openstack-dev] [chef] PTL candidacy for Stein Message-ID: Howdy! I am submitting my name to continue as PTL for Chef OpenStack. If you don't know me, I am scas on Freenode. I work for Workday, where I am an active operator and upstream developer. I have contributed to OpenStack since 2014, and joined the Chef core team in early 2015. Since then, I have served as PTL for four cycles. I am also an active member of the Sous-Chefs organization, which fosters maintainership of community Chef cookbooks that could no longer be maintained by their author(s). 
My life as a triple threat, as well as being largely in the deploy automation space, gives me a unique perspective on the use cases for Chef OpenStack. Development continues to run about a release behind the coordinated release to stabilize due to contributor availability. In that time, overall testing has improved to raise the overall testing confidence in landing more aggressive changes. Local testing infrastructure tends to run closer to trunk to keep a pulse on how upstream changes will affect the cookbooks closer to review time. This, in turn, influences the changes that do pass the sniff test. For Stein, I would like to focus on some of the efforts started during Rocky. * Awareness and Community Chef OpenStack is extremely powerful and flexible, but it is not easy for new contributors to get involved. That is, if they can find it, down the dark alley, through the barber shop, and behind the door with a secret knock. Documentation has been a handful of terse Markdown docs and READMEs that do not evolve as fast as the code, which I think impacts visibility and artificially creates a barrier to entry. I would like to place more emphasis on providing this more well-lit entry point for new and existing users alike. * Consistency and HA Stability is never a given, but it is pretty close with Chef OpenStack. Each change runs through multiple, iterative tests before it hits Gerrit. However, not every change runs through those same tests in the gate due to the gap between local and integration. This natural gap has resulted in multiple chef-client versions and OpenStack configurations testing each change. There have existed HA primitives in the cookbooks for years, but there are no published working examples. I am aiming to continue this effort to further reducing the human element in executing the tests. * Continued work on containerization With efforts to deploy OpenStack in the context of containers, Chef OpenStack has not shared in the fanfare. I shipped a very shaky dokken support out of a hack day at the 2017 Chef Community Summit in Seattle, and have refined it over time to where it's consistently Doing A Thing. I have found regressions upstream (e.g. packaging), and have conservatively implemented workarounds to coax things into submission when the actual fix would take more months to land. I wish to continue that effort, and expand to other Ansible-based and Kitchen-based integration scenarios to provide examples of how to get to OpenStack using Chef. These are but some of my personal goals and aspirations. I hope to be able to make progress on them all, but reality may temper those aspirations. I would love to connect with more new users and contributors. You can reach out to me directly, or find me in #openstack-chef. Thanks! -scas From mriedemos at gmail.com Fri Jul 27 19:06:57 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 27 Jul 2018 14:06:57 -0500 Subject: [openstack-dev] [nova] [placement] placement update 18-30 In-Reply-To: References: Message-ID: On 7/27/2018 8:07 AM, Chris Dent wrote: > # Questions > > I wrote up some analysis of the way the [resource tracker talks to > placement](https://anticdent.org/novas-use-of-placement.html). It > identifies some redundancies. Actually it reinforces that some > redundancies we've known about are still there. Fixing some of these > things might count as bug fixes. What do you think? Performance issues are definitely bugs so I think that's fair. How big of an impact the solution is is another thing. 
> > * "How to deploy / model shared disk. Seems fairly straight-forward, >     and we could even maybe create a multi-node ceph job that does >     this - wouldn't that be awesome?!?!", says an enthusiastic Matt >     Riedemann. Two updates here: 1. We've effectively disabled the shared storage provider stuff in the libvirt driver: https://bugs.launchpad.net/nova/+bug/1784020 Because of the reasons listed in the bug. That's going to require a spec in Stein if we're going to fully support shared storage providers and the work items from that bug would be a good start for a spec. 2. Coincidentally, I *just* got a working ceph (single-node) CI job run working with a shared storage provider providing DISK_GB for the single compute node provider: https://review.openstack.org/#/c/586363/ Fleshing that out for a multi-node job shouldn't be too hard. All of that is now entered in the Stein PTG etherpad for discussion in Denver. > > * The whens and wheres of re-shaping and VGPUs. I'm not sure anything about this has to be documented for Rocky since we didn't get /reshaper done so nothing regarding VGPUs in nova changed, right? Except I think Sylvain fixed one VGPU gap in the libvirt driver which was updated in the docs, but unrelated to /reshaper. -- Thanks, Matt From mriedemos at gmail.com Fri Jul 27 19:14:10 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 27 Jul 2018 14:14:10 -0500 Subject: [openstack-dev] [nova] keypair quota usage info for user In-Reply-To: <164d0d39b12.cc6ff7f3115505.8532449585113306574@ghanshyammann.com> References: <164d0d39b12.cc6ff7f3115505.8532449585113306574@ghanshyammann.com> Message-ID: On 7/25/2018 4:44 AM, Ghanshyam Mann wrote: > From checking the history and review discussion on [3], it seems that it was like that from staring. key_pair quota is being counted when actually creating the keypair but it is not shown in API 'in_use' field. Just so I'm clear which API we're talking about, you mean there is no totalKeypairsUsed entry in https://developer.openstack.org/api-ref/compute/#show-rate-and-absolute-limits correct? -- Thanks, Matt From mriedemos at gmail.com Fri Jul 27 19:20:01 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 27 Jul 2018 14:20:01 -0500 Subject: [openstack-dev] [nova] keypair quota usage info for user In-Reply-To: <5B58B6AE.1@windriver.com> References: <164d0d39b12.cc6ff7f3115505.8532449585113306574@ghanshyammann.com> <5B58B6AE.1@windriver.com> Message-ID: On 7/25/2018 12:43 PM, Chris Friesen wrote: > Keypairs are weird in that they're owned by users, not projects.  This > is arguably wrong, since it can cause problems if a user boots an > instance with their keypair and then gets removed from a project. > > Nova microversion 2.54 added support for modifying the keypair > associated with an instance when doing a rebuild.  Before that there was > no clean way to do it. While discussing what eventually became microversion 2.54, sdague sent a nice summary of several discussions related to this: http://lists.openstack.org/pipermail/openstack-dev/2017-October/123071.html Note the entries in there about how several deployments don't rely on nova's keypair interface because of its clunky nature, and other ideas about getting nova out of the keypair business altogether and instead let barbican manage that and nova just references a key resource in barbican. 
Before we'd consider making incremental changes to nova's keypair interface and user/project scoping, I think we would need to think through that barbican route and what it could look like and how it might benefit everyone. -- Thanks, Matt From mriedemos at gmail.com Fri Jul 27 19:21:53 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 27 Jul 2018 14:21:53 -0500 Subject: [openstack-dev] [nova] keypair quota usage info for user In-Reply-To: References: <164d0d39b12.cc6ff7f3115505.8532449585113306574@ghanshyammann.com> Message-ID: On 7/27/2018 2:14 PM, Matt Riedemann wrote: >>  From checking the history and review discussion on [3], it seems that >> it was like that from staring. key_pair quota is being counted when >> actually creating the keypair but it is not shown in API 'in_use' field. > > Just so I'm clear which API we're talking about, you mean there is no > totalKeypairsUsed entry in > https://developer.openstack.org/api-ref/compute/#show-rate-and-absolute-limits > correct? Nevermind I see it now: https://developer.openstack.org/api-ref/compute/#show-the-detail-of-quota We have too many quota-related APIs. -- Thanks, Matt From jaypipes at gmail.com Fri Jul 27 19:48:44 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Fri, 27 Jul 2018 15:48:44 -0400 Subject: [openstack-dev] [nova] keypair quota usage info for user In-Reply-To: References: <164d0d39b12.cc6ff7f3115505.8532449585113306574@ghanshyammann.com> Message-ID: <8e2e49ca-663c-6092-ca92-f2b8e1b58dc0@gmail.com> On 07/27/2018 03:21 PM, Matt Riedemann wrote: > On 7/27/2018 2:14 PM, Matt Riedemann wrote: >>>  From checking the history and review discussion on [3], it seems >>> that it was like that from staring. key_pair quota is being counted >>> when actually creating the keypair but it is not shown in API >>> 'in_use' field. >> >> Just so I'm clear which API we're talking about, you mean there is no >> totalKeypairsUsed entry in >> https://developer.openstack.org/api-ref/compute/#show-rate-and-absolute-limits >> correct? > > Nevermind I see it now: > > https://developer.openstack.org/api-ref/compute/#show-the-detail-of-quota > > We have too many quota-related APIs. Yes. Yes we do. -jay From fungi at yuggoth.org Fri Jul 27 19:54:27 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 27 Jul 2018 19:54:27 +0000 Subject: [openstack-dev] [nova] keypair quota usage info for user In-Reply-To: References: <164d0d39b12.cc6ff7f3115505.8532449585113306574@ghanshyammann.com> <5B58B6AE.1@windriver.com> Message-ID: <20180727195427.msllsxbb4e5srs6w@yuggoth.org> On 2018-07-27 14:20:01 -0500 (-0500), Matt Riedemann wrote: [...] > Note the entries in there about how several deployments don't rely > on nova's keypair interface because of its clunky nature, and > other ideas about getting nova out of the keypair business > altogether and instead let barbican manage that and nova just > references a key resource in barbican. Before we'd consider making > incremental changes to nova's keypair interface and user/project > scoping, I think we would need to think through that barbican > route and what it could look like and how it might benefit > everyone. If the Nova team is interested in taking it in that direction, I'll gladly lobby to convert the "A Castellan-compatible key store" entry at https://governance.openstack.org/tc/reference/base-services.html#current-list-of-base-services to a full on "Barbican" entry (similar to the "Keystone" entry). 
The only thing previously standing in the way was a use case for a fundamental feature from the trademark programs' interoperability set. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From Jean-Philippe at evrard.me Fri Jul 27 21:49:51 2018 From: Jean-Philippe at evrard.me (Jean-Philippe Evrard) Date: Fri, 27 Jul 2018 21:49:51 +0000 Subject: [openstack-dev] [charms] PTL non-candidacy for Stein cycle In-Reply-To: References: Message-ID: <3C923DA7-DD7A-4919-97E2-C70867EBDA2E@evrard.me> On July 27, 2018 4:09:04 PM UTC, James Page wrote: >Hi All > >I won't be standing for PTL of OpenStack Charms for this upcoming >cycle. > >Its been my pleasure to have been PTL since the project was accepted >into >OpenStack, but its time to let someone else take the helm. I'm not >going >anywhere but expect to have a bit of a different focus for this cycle >(at >least). > >Cheers > >James Thanks for the work done on a cross project level and your communication! JP (evrardjp) -------------- next part -------------- An HTML attachment was scrubbed... URL: From frode.nordahl at canonical.com Sat Jul 28 15:25:53 2018 From: frode.nordahl at canonical.com (Frode Nordahl) Date: Sat, 28 Jul 2018 17:25:53 +0200 Subject: [openstack-dev] [charms] PTL candidacy for Stein cycle Message-ID: Hello all, I hereby announce my candidacy for PTL of the OpenStack Charms project [0]. Through the course of the past two years I have made many contributions to the Charms projects and I have had the privilege of becoming a Core developer. Prior to focusing on the Charms project I have made upstream contributions in other OpenStack projects and I have followed the unfolding and development of the OpenStack community with great interest. We live in exciting times and I believe great things are afoot for OpenStack as a stable, versatile and solid contender in the cloud space. It would be my privilege to be able to help further that along as PTL for the Charms project. Our project has a strong and disperse group of contributors and we are blessed with motivated and assertive people taking interest in maintaining existing code as well as developing new features. The most important aspect of my job as PTL will be to make sure we maintain room for the diversity of contributions without losing velocity and direction. Maintaining and developing our connection with the broader OpenStack community will also be of great importance. Some key areas of focus for Stein cycle: - Python 3 migration - The clock is ticking for Python 2 and we need to continue the drive towards porting all our code to Python 3 - Continue modernization of test framework - Sustained software quality is only as good as you can prove through the quality of your unit and functional tests. - Great progress has been made this past cycle in developing and extending functionality of a new framework for our functional tests and we need to continue this work. - Continue to build test driven development culture, and export this culture to contributors outside the core team. - [Multi-cycle] Explore possibilities and methodologies for Classic -> layered Reactive Charm migrations - A lot of effort has been put into the Reactive Charm framework and the reality of writing a new Charm today is quite different from what it was just a few years ago. 
- The time and effort needed to maintain a layered Reactive Charm is also far less than what it takes to maintain a classic Charm. - There are many hard and difficult topics surrounding such a migration but I think it is worth spending some time exploring our options of how we could get there. - Evaluate use of upstream release tools - The OpenStack release team has put together some great tools that might make our release duties easier. Let us evaluate adopting some of them for our project. 0: https://review.openstack.org/#/c/586821/ -- Frode Nordahl (IRC: fnordahl) -------------- next part -------------- An HTML attachment was scrubbed... URL: From smonderer at vasonanetworks.com Sat Jul 28 22:19:41 2018 From: smonderer at vasonanetworks.com (Samuel Monderer) Date: Sun, 29 Jul 2018 01:19:41 +0300 Subject: [openstack-dev] [tripleo] network isolation can't find files referred to on director In-Reply-To: References: Message-ID: Hi, With my nic configs I get the following error 2018-07-26 16:42:49Z [overcloud.ComputeGammaV3.0.NetworkConfig]: CREATE_FAILED resources.NetworkConfig: Parameter 'InternalApiNetworkVlanID' is invalid: could not convert string to float: 2018-07-26 16:42:49Z [overcloud.ComputeGammaV3.0]: CREATE_FAILED Resource CREATE failed: resources.NetworkConfig: Parameter 'InternalApiNetworkVlanID' is invalid: could not convert string to float: 2018-07-26 16:42:50Z [overcloud.ComputeGammaV3.0]: CREATE_FAILED resources.NetworkConfig: resources[0].Parameter 'InternalApiNetworkVlanID' is invalid: could not convert string to float: 2018-07-26 16:42:50Z [overcloud.ComputeGammaV3]: UPDATE_FAILED Resource CREATE failed: resources.NetworkConfig: resources[0].Parameter 'InternalApiNetworkVlanID' is invalid: could not convert string to float: 2018-07-26 16:42:51Z [overcloud.ComputeGammaV3]: CREATE_FAILED resources.ComputeGammaV3: Resource CREATE failed: resources.NetworkConfig: resources[0].Parameter 'InternalApiNetworkVlanID' is invalid: could not convert string to float: 2018-07-26 16:42:51Z [overcloud]: CREATE_FAILED Resource CREATE failed: resources.ComputeGammaV3: Resource CREATE failed: resources.NetworkConfig: resources[0].Parameter 'InternalApiNetworkVlanID' is invalid: could not convert string to float: 2018-07-26 16:42:51Z [overcloud.ComputeGammaV3.0.NetIpMap]: CREATE_COMPLETE state changed Stack overcloud CREATE_FAILED overcloud.ComputeGammaV3.0.NetworkConfig: resource_type: OS::TripleO::ComputeGammaV3::Net::SoftwareConfig physical_resource_id: status: CREATE_FAILED status_reason: | resources.NetworkConfig: Parameter 'InternalApiNetworkVlanID' is invalid: could not convert string to float: Heat Stack create failed. Heat Stack create failed. (undercloud) [stack at staging-director ~]$ packet_write_wait: Connection to 192.168.50.30 port 22: Broken pipe The parameter is defined as following in nic config file InternalApiNetworkVlanID: default: '' description: Vlan ID for the internal_api network traffic. type: number I worked fine when I was using RHOSP11(Ocata) The custom_network_data.yaml defines the internal network as following - name: InternalApi name_lower: internal_api vip: true vlan: 711 ip_subnet: '172.16.2.0/24' allocation_pools: [{'start': '172.16.2.4', 'end': '172.16.2.250'}] Samuel On Fri, Jul 27, 2018 at 7:41 PM, James Slagle wrote: > On Thu, Jul 26, 2018 at 4:58 AM, Samuel Monderer > wrote: > > Hi James, > > > > I understand the network-environment.yaml will also be generated. > > What do you mean by rendered path? 
Will it be > > "usr/share/openstack-tripleo-heat-templates/network/ports/"? > > Yes, the rendered path is the path that the jinja2 templating process > creates. > > > By the way I didn't find any other place in my templates where I refer to > > these files? > > What about custom nic configs is there also a jinja2 process to create > them? > > No. custom nic configs are by definition, custom to the environment > you are deploying. Only you know how to properly define what newtork > configurations needs applying. > > Our sample nic configs are generated from jinja2 now. For example: > tripleo-heat-templates/network/config/single-nic-vlans/role.role.j2.yaml > > If you wanted to follow that pattern such that your custom nic config > templates were generated, you could do that > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony at bakeyournoodle.com Sat Jul 28 23:49:34 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Sun, 29 Jul 2018 09:49:34 +1000 Subject: [openstack-dev] [all][tc][release][election][adjutant] Welcome Adjutant as an official project! In-Reply-To: <1531849678-sup-8719@lrrr.local> References: <1531849678-sup-8719@lrrr.local> Message-ID: <20180728234933.GP30070@thor.bakeyournoodle.com> On Tue, Jul 17, 2018 at 01:52:39PM -0400, Doug Hellmann wrote: > The Adjutant team's application [1] to become an official project > has been approved. Welcome! > > As I said on the review, because it is past the deadline for Rocky > membership, Adjutant will not be considered part of the Rocky > release, but a future release can be part of Stein. > > The team should complete the onboarding process for new projects, > including holding PTL elections for Stein, Now would be a good time to do this :) See: https://governance.openstack.org/election/#how-to-submit-a-candidacy for details Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From tpb at dyncloud.net Sun Jul 29 20:20:07 2018 From: tpb at dyncloud.net (Tom Barron) Date: Sun, 29 Jul 2018 16:20:07 -0400 Subject: [openstack-dev] [manila][PTL][Election] PTL candidacy for the Stein cycle Message-ID: <20180729202007.o3pxakojb2upbaic@barron.net> Fellow Stackers, I just served a term as Manila PTL for Rocky and am writing to say that if you choose me I'd like to also take on that role for the Stein release cycle. I think I've learned the mechanics now and can focus more energy on priorities. Today manila itself is pretty solid. It doesn't need lots of new features. Back end vendors always want to expose new bells and whistles, and that's fine if they help with the review load and contribute to the community. Reciprocity makes the world go around. But I see the adoption curve for manila just now ramping up and my own focus will be to enable that by working to harden manila and to make it easier to use, both within and outside of openstack itself. Manila offers file-shares as a service -- self-service, RWX, random access storage -- and abstracts over a variety of file-systems and sharing protocols. Manila doesn't care if the consumers of the file systems live within openstack or not. It's just a matter of network reachability and the access rights that manila manages. Besides being able to run as one part of a full openstack deployment, manila can run on its own, with keystone to enable multi-tenancy, or completely standalone. 
So I see manila as a true Open Infrastructure project. It can turn a rack of unconfigured equipment into self-service shared file systems without limiting itself to the (very important) Virtual Private Server use case [1]. I will, accordingly, work to position manila as *the* open source solution for deploying RWX random access storage across data centers and across clouds. To that end we need to: * get manila into generalized cloud providers like CSI [2] * get manila into the openstack sdk and openstack client * get more of the almost thirty manila back ends exposed in production-quality deployment tools like tripleo, kolla-*, and juju. * continue to fix bugs, improve our CI, run more stuff in gate with python 3 These are the things that will drive me if you choose me as manila PTL. Thanks for listening, -- Tom Barron (tbarron) [1] https://www.zerobanana.com/archive/2018/07/17#openstack-layer-model-limitations [2] https://github.com/container-storage-interface/spec/blob/master/spec.md From miguel at mlavalle.com Sun Jul 29 21:42:45 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Sun, 29 Jul 2018 16:42:45 -0500 Subject: [openstack-dev] [Neutron][PTL][Election] PTL candidacy for the Stein cycle Message-ID: Hello OpenStackers, I write this to submit my candidacy for Neutron PTL during the Stein cycle. Being PTL of this project during the Queens and Rocky cycles has been the highest honor of my career and I want to have another shot at the very rewarding job of helping the community to deliver better networking functionality in OpenStack. We had a successful Rocky cycle, delivering on most of the goals we set for ourselves in Dublin: * Port forwardings for floating IPs was a feature under planning for several cycles. In Rocky we rolled up our sleeves and implemented it. * We made the behavior of our API more consistent by properly handling filters in requests. * We had an excellent cross project experience with Nova, implementing multiple port bindings to better support live instances migration. * We made great progress moving generic DB functionality to neutron-lib and consuming it from Neutron and the Stadium projects. * We extended the logging API to support FWaaS v2.0. Moving forward, these are some of the goals that I propose for the team during the Stein cycle: * Conclude the implementation of bandwidth based scheduling, that will enable Neutron and Nova to guarantee network bandwidth to instances based on QoS policies. * Implement DVR-aware announcement of fixed IPs in neutron-dynamic-routing. * Continue extending Neutron QoS to support L3 router gateway IPs and VPN services. * Conclude specifying and implement port mirroring for SR-IOV VF to VF mirroring. * Extend the logging API to support SNAT. * Improve the performance of port creation in bulk. * Neutron makes extensive use of the Ryu library, which will not be supported by its implementor anymore. As decided in Vancouver, we will fork it and continue supporting it in the OpenStack community. * We will continue our efforts of de-coupling common functionality from the Neutron repository and moving it to neutron-lib. * With the recent departure of some of our contributors, we need to strengthen our team of core reviewers. I have recently been working with some of our team members towards this goal and will propose nominations in the up-coming cycle. Thank you for your consideration and for taking the time to read this Miguel Lavalle (mlavalle) ~ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From hongbin034 at gmail.com  Sun Jul 29 22:16:37 2018
From: hongbin034 at gmail.com (Hongbin Lu)
Date: Sun, 29 Jul 2018 18:16:37 -0400
Subject: [openstack-dev] [Zun] Change of code review policy
Message-ID: 

Hi all,

I would like to announce a change to the code review process in Zun.
Traditionally, the code review policy required *two* +2s to approve a patch.
Now, it has changed to require only *one* +2 to approve. This change aims to
speed up the code review process and accelerate the merging of patches.

This change was discussed internally within the core team and the feedback
was unanimous. Please feel free to reach out if there is any question or
concern.

Best regards,
Hongbin
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From anlin.kong at gmail.com  Sun Jul 29 22:55:42 2018
From: anlin.kong at gmail.com (Lingxian Kong)
Date: Mon, 30 Jul 2018 10:55:42 +1200
Subject: [openstack-dev] [qinling] [PTL] [Election] PTL candidacy for the Stein cycle
Message-ID: 

Hi all,

I'm writing this email to propose myself as Qinling PTL for the Stein dev
cycle.

I have been serving as Qinling PTL in Rocky, the first dev cycle for Qinling
as an official OpenStack project. Qinling is a small team for now, but we
have made significant improvements and enhancements during Rocky, e.g.
support for TLS communication with the k8s API server, and support for
untrusted runtimes so that Qinling can leverage secure container
technologies such as Kata Containers and gVisor to run untrusted functions.
We also added function alias support to make it easy for function consumers
to invoke a function. Additionally, Qinling documentation has also improved
a little bit thanks to all the contributors.

Although there are a bunch of competitors to Qinling in the serverless area,
especially in the k8s ecosystem, the main difference between Qinling and
other solutions is that OpenStack is always the first citizen in Qinling's
world; supporting integration with other OpenStack services is always our
first priority. So we won't compete with other FaaS projects and we don't
care.

So speaking of Stein, I'd like to take some time to focus on the next set of
challenges to make Qinling production ready as soon as possible (it's also
an internal goal in my own team):

- Continue to work on the runtime security issue
- High availability of qinling-api and qinling-engine
- More intelligent execution scaling algorithm

Besides, all other contributions to Qinling are welcomed.

Thank you all for taking a moment to read what I have to say.

Cheers,
Lingxian Kong
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From soulxu at gmail.com  Mon Jul 30 01:21:26 2018
From: soulxu at gmail.com (Alex Xu)
Date: Mon, 30 Jul 2018 09:21:26 +0800
Subject: [openstack-dev] [nova] keypair quota usage info for user
In-Reply-To: <5B59E74F.3070704@windriver.com>
References: <164d0d39b12.cc6ff7f3115505.8532449585113306574@ghanshyammann.com>
 <5B58B6AE.1@windriver.com>
 <5B59E74F.3070704@windriver.com>
Message-ID: 

Oh, right, sorry, I kept thinking this was about per-user usage within a
specific tenant, just like other resources. You are right, a keypair has
nothing to do with the tenant, only with the user. Thanks.

2018-07-26 23:22 GMT+08:00 Chris Friesen : > On 07/25/2018 06:22 PM, Alex Xu wrote: > >> >> >> 2018-07-26 1:43 GMT+08:00 Chris Friesen > >: >> > > Keypairs are weird in that they're owned by users, not projects.
This >> is >> arguably wrong, since it can cause problems if a user boots an >> instance with >> their keypair and then gets removed from a project. >> >> Nova microversion 2.54 added support for modifying the keypair >> associated >> with an instance when doing a rebuild. Before that there was no >> clean way >> to do it. >> >> >> I don't understand this, we didn't count the keypair usage with the >> instance >> together, we just count the keypair usage for specific user. >> > > > I was giving an example of why it's strange that keypairs are owned by > users rather than projects. (When instances are owned by projects, and > keypairs are used to access instances.) > > > Chris > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhaochao1984 at gmail.com Mon Jul 30 01:26:59 2018 From: zhaochao1984 at gmail.com (=?UTF-8?B?6LW16LaF?=) Date: Mon, 30 Jul 2018 09:26:59 +0800 Subject: [openstack-dev] [trove] Considering the transfter of the project leadership Message-ID: > > Since the new folks are still so new - if this works for you - I would > recommend continuing on as the official PTL for one more release, but with > the > understanding that you would just be around to answer questions and give > advice > to help the new team get up to speed. That should hopefully be a small time > commitment for you while still easing that transition. > > Then hopefully by the T release it would not be an issue at all for someone > else to step up as the new PTL. Or even if things progress well, you could > step > down as PTL at some point during the Stein cycle if someone is ready to > take > over for you. > Sean, thanks a lot for these helpful suggestions. I thought about doing it this way before writing this post, and this is also the reason I asked the current active team members to nominate theselves. However, it's sad that the other active team members seems also busy on other thing. So I think it may be better Dariusz and his team could do more than us on the project in the next cycle. I believe they're experience on the project , and all other experiences about the whole OpenStack environment could be more familiar in the daily pariticipation of the project. On the other hand, I can also understand the lack of time to be a PTL since > it requires probably a lot of time to coordinate all the work. Dariusz, no, the current team is really a small team, so in fact I didn't need to do much coordination. The pain is that almost none of the current active team member are not focusing Trove, so even thought all of us want to do more progress in this cycle, we're not able to. This also the reason all of us think it's great to have to team focusing on the project could join. So, we don't have much time on the PTL election now, Dariusz, would you please discuss with your team who will do the nomination. And then we'll see if everything could work. We could also try to merge one the trove-tempest-plugin patches(https://review.openstack.org/#/c/580763/ could be merged first before we get the CI could test all the cases in the repo, sadlly currently we cannot the other patches as they're cannot be tested). However that patch is submitted by Krzysztof, though is authored by Dariusz. 
I don't know whether this could count as an identifiied commit when applying PTL nomination. And last, I want to repeat that, I'll still in the Trove delepoment for quit a long time, so I will help the new PTL and new contributor on everything I could. Thanks again for everyone who help me a lot in the last cycle, especially Fan Zhang, zhanggang, wangyao, song.jian and Manoj Kumar. -- To be free as in freedom. -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhaochao1984 at gmail.com Mon Jul 30 01:31:10 2018 From: zhaochao1984 at gmail.com (=?UTF-8?B?6LW16LaF?=) Date: Mon, 30 Jul 2018 09:31:10 +0800 Subject: [openstack-dev] [trove] Considering the transfter of the project leadership Message-ID: I found I made a mistake on the subject of this mail, so I corrected it in my last response, hope this won't make any further confusion. On Mon, Jul 30, 2018 at 9:26 AM, 赵超 wrote: > Since the new folks are still so new - if this works for you - I would >> recommend continuing on as the official PTL for one more release, but >> with the >> understanding that you would just be around to answer questions and give >> advice >> to help the new team get up to speed. That should hopefully be a small >> time >> commitment for you while still easing that transition. >> >> Then hopefully by the T release it would not be an issue at all for >> someone >> else to step up as the new PTL. Or even if things progress well, you >> could step >> down as PTL at some point during the Stein cycle if someone is ready to >> take >> over for you. >> > > Sean, thanks a lot for these helpful suggestions. I thought about doing > it this way before writing this post, and this is also the reason I asked > the current active team members to nominate theselves. > > However, it's sad that the other active team members seems also busy on > other thing. So I think it may be better Dariusz and his team could do more > than us on the project in the next cycle. I believe they're experience on > the project , and all other experiences about the whole OpenStack > environment could be more familiar in the daily pariticipation of the > project. > > On the other hand, I can also understand the lack of time to be a PTL >> since it requires probably a lot of time to coordinate all the work. > > > Dariusz, no, the current team is really a small team, so in fact I didn't > need to do much coordination. The pain is that almost none of the current > active team member are not focusing Trove, so even thought all of us want > to do more progress in this cycle, we're not able to. This also the reason > all of us think it's great to have to team focusing on the project could > join. > > So, we don't have much time on the PTL election now, Dariusz, would you > please discuss with your team who will do the nomination. And then we'll > see if everything could work. We could also try to merge one the > trove-tempest-plugin patches(https://review.openstack.org/#/c/580763/ > could be merged first before we get the CI could test all the cases in the > repo, sadlly currently we cannot the other patches as they're cannot be > tested). > > However that patch is submitted by Krzysztof, though is authored by > Dariusz. I don't know whether this could count as an identifiied commit > when applying PTL nomination. > > And last, I want to repeat that, I'll still in the Trove delepoment for > quit a long time, so I will help the new PTL and new contributor on > everything I could. 
> > Thanks again for everyone who help me a lot in the last cycle, especially > Fan Zhang, zhanggang, wangyao, song.jian and Manoj Kumar. > > -- > To be free as in freedom. > -- To be free as in freedom. -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony at bakeyournoodle.com Mon Jul 30 01:35:20 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Mon, 30 Jul 2018 11:35:20 +1000 Subject: [openstack-dev] [all][Election] Last days for PTL nomination Message-ID: <20180730013519.GA4829@thor.bakeyournoodle.com> Hello all, A quick reminder that we are in the last hours for PTL candidate nominations. If you want to stand for PTL, don't delay, follow the instructions at [1] to make sure the community knows your intentions. Make sure your nomination has been submitted to the openstack/election repository and approved by election officials. Election statistics[2]: Nominations started @ 2018-07-24 23:45:00 UTC Nominations end @ 2018-07-31 23:45:00 UTC Nominations duration : 7 days, 0:00:00 Nominations remaining : 1 day, 22:12:07 Nominations progress : 72.50% --------------------------------------------------- Projects[2] : 65 Projects with candidates : 29 ( 44.62%) Projects with election : 0 ( 0.00%) --------------------------------------------------- Need election : 0 () Need appointment : 36 (Adjutant Blazar Cinder Designate Documentation Dragonflow Freezer Horizon Ironic Kolla Loci Manila Masakari Monasca Nova Octavia OpenStackAnsible OpenStackClient OpenStack_Helm Oslo Packaging_Rpm Puppet_OpenStack Qinling Rally RefStack Sahara Searchlight Security Solum Storlets Trove Vitrage Watcher Winstackers Zaqar Zun) =================================================== Stats gathered @ 2018-07-30 01:32:53 UTC This means that with approximately 2 days left, 39 projects will be deemed leaderless. In this case the TC will oversee PTL selection as described by [3]. Thank you, [1] http://governance.openstack.org/election/#how-to-submit-your-candidacy [2] Assuming the open reviews below are validated https://review.openstack.org/#/q/is:open+project:openstack/election Which ATM includes: Magnum Tacker OpenStack_Charms Neutron Manilla Tripleo Barbican Murano [3] http://governance.openstack.org/resolutions/20141128-elections-process-for-leaderless-programs.html Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From dharmendra.kushwaha at india.nec.com Mon Jul 30 04:07:21 2018 From: dharmendra.kushwaha at india.nec.com (Dharmendra Kushwaha) Date: Mon, 30 Jul 2018 04:07:21 +0000 Subject: [openstack-dev] [Tacker] [PTL] [Elections] Candidacy for Tacker PTL (Stein) Message-ID: Hi all, I would like to announce my candidacy as Tacker PTL for the upcoming Stein cycle. I am Dharmendra Kushwaha known as dkushwaha on IRC. I member of Tacker community since Mitaka release. During my journey, I was involved in multiple features development activities(like Network Service, alarm based monitoring, VNFFG-NS etc.), bug triages, fixes and code improvement activities, verifying and testing for team. It is a great experience for me to working in OpenStack/Tacker project with very supportive contributors team. Other than community, I was involved in couple of NFV related PoCs and to identify the production gaps in Tacker. 
As of now, Sridhar and Gong Yongsheng have done a great job with great team
support; I have learnt many things from them and would like to serve the
community in their footsteps. Over this journey, Tacker has gained multiple
rich features and is still growing in the same direction. Still, we have a
lot more to do, and my main goals for Stein are as follows:

* Tacker CI/CD improvement:
  - Currently Tacker lacks proper integration testing and broader scenario
    coverage on the gate; we need to introduce more coverage.
  - Focus on introducing more functional and scenario tests for maximum code
    coverage.
  - We have to set up a process where every change goes through proper
    integration testing on the gate.

* Tacker stability & production readiness:
  - Identify industry requirements and prioritize them.
  - Focus on better error handling and meaningful logging.
  - Cross-community contribution for feature integration.
  - Parallel/large node deployment stability.
  - More towards NFV-MANO rich features.

* Growing the community with more active core contributors.

* More in-person gatherings of the Tacker team at OpenStack conferences.

You can find my complete contributions here:
http://stackalytics.com/?release=all&project_type=all&metric=commits&user_id=dharmendra-kushwaha

Thanks for reading and considering my candidacy.

Thanks & Regards
Dharmendra Kushwaha
IRC: dkushwaha

From gong.yongsheng at 99cloud.net  Mon Jul 30 04:59:15 2018
From: gong.yongsheng at 99cloud.net (=?GBK?B?uajTwMn6?=)
Date: Mon, 30 Jul 2018 12:59:15 +0800 (CST)
Subject: [openstack-dev] [all][Election] candidacy for Tacker PTL stein cycle
Message-ID: 

Hi,

This is my self-nomination to continue running as Tacker PTL for the Stein
cycle.

In the Rocky cycle, we got some big features:
- NS with the VNFFG feature, which completes the last part of the ETSI
  concept.
- Placement policy, which gives us more choices for placing a VNF's VDUs.
- Enabled multi-node CI tests, and developer activity is also picking up.

In the Stein cycle, I plan to pursue three top priorities among others:
- Continue to stabilize Tacker and make it of production quality by breaking
  the tacker server into more components.
- Make Tacker more ETSI compatible.
- Enhance container-based VNFs.

Thanks for your consideration.

irc: gongysh
review for tacker project:
http://stackalytics.com/?metric=marks&module=tacker&user_id=gongysh

--
Yongsheng Gong (龚永生)
99CLOUD Co. Ltd. (九州云信息科技有限公司)
Email: gong.yongsheng at 99cloud.net
Addr: Room 806, Tower B, Jiahua Building, No. 9 Shangdi 3rd Street,
Haidian District, Beijing, China
Mobile: +86-18618199879
WebSite: http://99cloud.net
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sxmatch1986 at gmail.com  Mon Jul 30 06:01:41 2018
From: sxmatch1986 at gmail.com (hao wang)
Date: Mon, 30 Jul 2018 14:01:41 +0800
Subject: [openstack-dev] [all][Election] candidacy for Zaqar PTL Stein cycle
Message-ID: 

Hi all,

This is my self-nomination to continue running as Zaqar PTL for the Stein
cycle.

We did a great job in Rocky. We supported different formats of client ID to
suit more use cases. We also introduced the ability to query queues filtered
by name and metadata in the MongoDB backend.

For the Stein release, I want to finish the list of jobs below.

1. Refactoring

We want to remove the useless pool group entirely in Stein and make the
model of pools and flavors clearer.

2.
Scalability We will continue our work to improve Zaqar's performance under the different cases of load increasing: 1) number of publishers 2) number of subscribers 3) rate of messages published or consumed 4) number of messages 5) number of queues 6) size of messages 3. Usability We still have some works that inherit from Rocky. Those tasks contain very useful features: 1) Introduce a new resource for queue's metadata 2) Introduce topic resource for notification 3) Delete message with claim ID 4) Send Email subscription by Zaqar Thanks for your consideration! From aschadin at sbcloud.ru Mon Jul 30 06:13:27 2018 From: aschadin at sbcloud.ru (=?utf-8?B?0KfQsNC00LjQvSDQkNC70LXQutGB0LDQvdC00YAg0KHQtdGA0LPQtdC10LI=?= =?utf-8?B?0LjRhw==?=) Date: Mon, 30 Jul 2018 06:13:27 +0000 Subject: [openstack-dev] [watcher] PTL on vacation Message-ID: Hi all, I'll be on vacation until August 10th and am available for emails. P.S. My candidacy for Stein cycle will be submitted this evening. Best wishes Alex -------------- next part -------------- An HTML attachment was scrubbed... URL: From eumel at arcor.de Mon Jul 30 07:15:50 2018 From: eumel at arcor.de (Frank Kloeker) Date: Mon, 30 Jul 2018 09:15:50 +0200 Subject: [openstack-dev] [i18n] Edge and Containers whitepapers ready for translation In-Reply-To: <5B50A476.8010606@openstack.org> References: <5B4CB64F.4060602@openstack.org> <5B4CB93F.6070202@openstack.org> <5B4DF6B4.9030501@openstack.org> <7f053a98-718b-0470-9ffa-934f0e715a76@gmail.com> <5B4E132E.5050607@openstack.org> <5B50A476.8010606@openstack.org> Message-ID: Hi Jimmy, Korean and German version are now done on the new format. Can you check publishing? thx Frank Am 2018-07-19 16:47, schrieb Jimmy McArthur: > Hi all - > > Follow up on the Edge paper specifically: > https://translate.openstack.org/iteration/view/edge-computing/pot-translation/documents?dswid=-3192 > > This is now available. As I mentioned on IRC this morning, it should > be VERY close to the PDF. Probably just needs a quick review. > > Let me know if I can assist with anything. > > Thank you to i18n team for all of your help!!! > > Cheers, > Jimmy > > Jimmy McArthur wrote: >> Ian raises some great points :) I'll try to address below... >> >> Ian Y. Choi wrote: >>> Hello, >>> >>> When I saw overall translation source strings on container >>> whitepaper, I would infer that new edge computing whitepaper >>> source strings would include HTML markup tags. >> One of the things I discussed with Ian and Frank in Vancouver is the >> expense of recreating PDFs with new translations. It's prohibitively >> expensive for the Foundation as it requires design resources which we >> just don't have. As a result, we created the Containers whitepaper in >> HTML, so that it could be easily updated w/o working with outside >> design contractors. I indicated that we would also be moving the Edge >> paper to HTML so that we could prevent that additional design resource >> cost. >>> On the other hand, the source strings of edge computing whitepaper >>> which I18n team previously translated do not include HTML markup >>> tags, since the source strings are based on just text format. >> The version that Akihiro put together was based on the Edge PDF, which >> we unfortunately didn't have the resources to implement in the same >> format. >>> >>> I really appreciate Akihiro's work on RST-based support on publishing >>> translated edge computing whitepapers, since >>> translators do not have to re-translate all the strings. >> I would like to second this. 
It took a lot of initiative to work on >> the RST-based translation. At the moment, it's just not usable for >> the reasons mentioned above. >>> On the other hand, it seems that I18n team needs to investigate on >>> translating similar strings of HTML-based edge computing whitepaper >>> source strings, which would discourage translators. >> Can you expand on this? I'm not entirely clear on why the HTML based >> translation is more difficult. >>> >>> That's my point of view on translating edge computing whitepaper. >>> >>> For translating container whitepaper, I want to further ask the >>> followings since *I18n-based tools* >>> would mean for translators that translators can test and publish >>> translated whitepapers locally: >>> >>> - How to build translated container whitepaper using original >>> Silverstripe-based repository? >>> https://docs.openstack.org/i18n/latest/tools.html describes well >>> how to build translated artifacts for RST-based OpenStack >>> repositories >>> but I could not find the way how to build translated container >>> whitepaper with translated resources on Zanata. >> This is a little tricky. It's possible to set up a local version of >> the OpenStack website >> (https://github.com/OpenStackweb/openstack-org/blob/master/installation.md). >> However, we have to manually ingest the po files as they are >> completed and then push them out to production, so that wouldn't do >> much to help with your local build. I'm open to suggestions on how we >> can make this process easier for the i18n team. >> >> Thank you, >> Jimmy >>> >>> >>> With many thanks, >>> >>> /Ian >>> >>> Jimmy McArthur wrote on 7/17/2018 11:01 PM: >>>> Frank, >>>> >>>> I'm sorry to hear about the displeasure around the Edge paper. As >>>> mentioned in a prior thread, the RST format that Akihiro worked did >>>> not work with the Zanata process that we have been using with our >>>> CMS. Additionally, the existing EDGE page is a PDF, so we had to >>>> build a new template to work with the new HTML whitepaper layout we >>>> created for the Containers paper. I outlined this in the thread " >>>> [OpenStack-I18n] [Edge-computing] [Openstack-sigs] Edge Computing >>>> Whitepaper Translation" on 6/25/18 and mentioned we would be ready >>>> with the template around 7/13. >>>> >>>> We completed the work on the new whitepaper template and then put >>>> out the pot files on Zanata so we can get the po language files >>>> back. If this process is too cumbersome for the translation team, >>>> I'm open to discussion, but right now our entire translation process >>>> is based on the official OpenStack Docs translation process outlined >>>> by the i18n team: >>>> https://docs.openstack.org/i18n/latest/en_GB/tools.html >>>> >>>> Again, I realize Akihiro put in some work on his own proposing the >>>> new translation type. If the i18n team is moving to this format >>>> instead, we can work on redoing our process. >>>> >>>> Please let me know if I can clarify further. >>>> >>>> Thanks, >>>> Jimmy >>>> >>>> Frank Kloeker wrote: >>>>> Hi Jimmy, >>>>> >>>>> permission was added for you and Sebastian. The Container >>>>> Whitepaper is on the Zanata frontpage now. But we removed Edge >>>>> Computing whitepaper last week because there is a kind of >>>>> displeasure in the team since the results of translation are still >>>>> not published beside Chinese version. It would be nice if we have a >>>>> commitment from the Foundation that results are published in a >>>>> specific timeframe. 
This includes your requirements until the >>>>> translation should be available. >>>>> >>>>> thx Frank >>>>> >>>>> Am 2018-07-16 17:26, schrieb Jimmy McArthur: >>>>>> Sorry, I should have also added... we additionally need >>>>>> permissions so >>>>>> that we can add the a new version of the pot file to this project: >>>>>> https://translate.openstack.org/project/view/edge-computing/versions?dswid=-7835 >>>>>> Thanks! >>>>>> Jimmy >>>>>> >>>>>> >>>>>> >>>>>> Jimmy McArthur wrote: >>>>>>> Hi all - >>>>>>> >>>>>>> We have both of the current whitepapers up and available for >>>>>>> translation. Can we promote these on the Zanata homepage? >>>>>>> >>>>>>> https://translate.openstack.org/project/view/leveraging-containers-openstack?dswid=5684 >>>>>>> https://translate.openstack.org/iteration/view/edge-computing/master/documents?dswid=5684 >>>>>>> Thanks all! >>>>>>> Jimmy >>>>>> >>>>>> >>>>>> __________________________________________________________________________ >>>>>> OpenStack Development Mailing List (not for usage questions) >>>>>> Unsubscribe: >>>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>> >>>> >>>> >>>> __________________________________________________________________________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From jean-philippe at evrard.me Mon Jul 30 08:04:47 2018 From: jean-philippe at evrard.me (=?utf-8?q?jean-philippe=40evrard=2Eme?=) Date: Mon, 30 Jul 2018 10:04:47 +0200 Subject: [openstack-dev] =?utf-8?q?=5Bopenstack-ansible=5D_Using_jmespath_?= =?utf-8?q?more?= Message-ID: <2992-5b5ec680-13-7036d78@206243510> Hello, According to the readability test here [1], contributors prefer reading a task like the following: - name: Fail if service was deployed using a different installation method fail: msg: "Switching installation methods for OpenStack services is not supported" when: - ansible_local is defined - ansible_local.openstack_ansible is defined - ansible_local.openstack_ansible.aodh is defined - ansible_local.openstack_ansible.aodh.install_method is defined - ansible_local.openstack_ansible.aodh.install_method != aodh_install_method as: - name: Fail if service was deployed using a different installation method fail: msg: "Switching installation methods for OpenStack services is not supported" when: - (ansible_local | json_query("openstack_ansible.aodh.install_method")) is not "" - ansible_local.openstack_ansible.aodh.install_method != aodh_install_method (Short explanation, json_query returns an empty string if path is not found, instead of having an ansible failure, which is very welcomed. 
In the case above, if everything is defined, there will be no empty string, and we can compare the string contents with the second when condition). Another example avoiding the "is defined" dance: Checking if install_method IS equal to "source" in the local facts could be simplified to: when: - (ansible_local | json_query("openstack_ansible.aodh.install_method")) == 'source' I hope this will inspire people to refactor some tedious to read tasks into more readable ones. Thanks for your contributions! Best regards, Jean-Philippe Evrard (evrardjp) [1]: https://etherpad.openstack.org/p/osa-readability-test From jean-philippe at evrard.me Mon Jul 30 08:16:44 2018 From: jean-philippe at evrard.me (=?utf-8?q?jean-philippe=40evrard=2Eme?=) Date: Mon, 30 Jul 2018 10:16:44 +0200 Subject: [openstack-dev] =?utf-8?q?=5Bopenstack-ansible=5D_Proposing_Jonat?= =?utf-8?q?han_Rosser_as_core_reviewer?= Message-ID: <58ff-5b5ec980-31-29cd3a00@223498964> Hello everyone, I'd like to propose Jonathan Rosser (jrosser) as core reviewer for OpenStack-Ansible. The BBC team [1] has been very active recently across the board, but worked heavily in our ops repo, making sure the experience is complete for operators. I value Jonathan's opinion (I remember the storage backend conversations for lxc/systemd-nspawn!), and I'd like this positive trend to continue. On top of it Jonathan has been recently reviewing quite a series of patches, and is involved into some of our important work: bringing the Bionic support. Best regards, Jean-Philippe Evrard (evrardjp) [1]: http://stackalytics.com/?project_type=openstack&release=rocky&metric=commits&company=BBC From lhinds at redhat.com Mon Jul 30 08:23:57 2018 From: lhinds at redhat.com (Luke Hinds) Date: Mon, 30 Jul 2018 15:23:57 +0700 Subject: [openstack-dev] [all][Election] Last days for PTL nomination In-Reply-To: <20180730013519.GA4829@thor.bakeyournoodle.com> References: <20180730013519.GA4829@thor.bakeyournoodle.com> Message-ID: Hi, Security is a SIG and no longer a project (changed as of rocky cycle). Regards Luke On Mon, 30 Jul 2018, 08:36 Tony Breeds, wrote: > Hello all, > > A quick reminder that we are in the last hours for PTL candidate > nominations. > > If you want to stand for PTL, don't delay, follow the instructions > at [1] to make sure the community knows your intentions. > > Make sure your nomination has been submitted to the openstack/election > repository and approved by election officials. > > Election statistics[2]: > Nominations started @ 2018-07-24 23:45:00 UTC > Nominations end @ 2018-07-31 23:45:00 UTC > Nominations duration : 7 days, 0:00:00 > Nominations remaining : 1 day, 22:12:07 > Nominations progress : 72.50% > --------------------------------------------------- > Projects[2] : 65 > Projects with candidates : 29 ( 44.62%) > Projects with election : 0 ( 0.00%) > --------------------------------------------------- > Need election : 0 () > Need appointment : 36 (Adjutant Blazar Cinder Designate > Documentation Dragonflow Freezer Horizon > Ironic Kolla Loci Manila Masakari > Monasca Nova Octavia OpenStackAnsible > OpenStackClient OpenStack_Helm Oslo > Packaging_Rpm Puppet_OpenStack Qinling > Rally RefStack Sahara Searchlight > Security Solum Storlets Trove Vitrage > Watcher Winstackers Zaqar Zun) > =================================================== > Stats gathered @ 2018-07-30 01:32:53 UTC > > > This means that with approximately 2 days left, 39 projects will > be deemed leaderless. In this case the TC will oversee PTL selection as > described by [3]. 
> > Thank you,
> >
> [1] http://governance.openstack.org/election/#how-to-submit-your-candidacy
> [2] Assuming the open reviews below are validated
>     https://review.openstack.org/#/q/is:open+project:openstack/election
>     Which ATM includes:
>       Magnum Tacker OpenStack_Charms Neutron Manilla Tripleo Barbican
>       Murano
> [3] http://governance.openstack.org/resolutions/20141128-elections-process-for-leaderless-programs.html
>
> Yours Tony.
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From linghucongsong at 163.com  Mon Jul 30 09:03:03 2018
From: linghucongsong at 163.com (linghucongsong)
Date: Mon, 30 Jul 2018 17:03:03 +0800 (CST)
Subject: [openstack-dev] [all][Election] candidacy for Tricircle PTL Stein cycle
Message-ID: <38728742.6c7e.164ea6d3653.Coremail.linghucongsong@163.com>

Hi all!

I would like to announce my nomination for the Tricircle PTL candidacy in
the Stein cycle.

My name is Baisen Song, and my IRC handle is songbaisen. I am currently a
core member of Tricircle for the Rocky cycle and have been the most active
participant in the development of this project since last year. My team and
I have finished the most blueprints in Tricircle.

During the Rocky cycle, we began to enable mutable configuration in
Tricircle, improved network deletion reliability, made it possible to reuse
a deleted port after the VM has been deleted and recreated in another
region, and added service function chain support. We also started to
implement the new L3 networking model.

For the coming Stein cycle, here is some work we can focus on:

* Driver-based implementation of Trunk; the current implementation is
  plugin-based.
* Implement the new cross-Neutron L3 networking model that doesn't depend on
  host routes.
* Improve how Tricircle works with Nova cells v2.
* Add more unit and smoke test cases.
* Make Trio2o and Tricircle work more closely together.

Thank you for taking the time to consider me for Stein PTL. I hope everyone
will enjoy joining Tricircle.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From thierry at openstack.org  Mon Jul 30 09:47:49 2018
From: thierry at openstack.org (Thierry Carrez)
Date: Mon, 30 Jul 2018 11:47:49 +0200
Subject: [openstack-dev] [trove] Considering the transfter of the project leadership
In-Reply-To: <20180727074731eucms1p384f2e07d3a9d6745f28554173e66a3c1@eucms1p3>
References: <20180726144850.GA4574@sm-workstation>
 <20180727074731eucms1p384f2e07d3a9d6745f28554173e66a3c1@eucms1p3>
Message-ID: <43d41960-dcce-6f98-25aa-17a2fbf15d57@openstack.org>

Dariusz Krol wrote:
> [...]
> On the other hand, I can also understand the lack of time to be a PTL
> since it requires probably a lot of time to coordinate all the work.
>
> Let's wait for Chao Zhao to give his opinion on the topic :)

If the PTL delegates the most time-intensive work (release liaison,
meeting chair...) then it should not be too much extra work. PTLs are
responsible by default for a lot of things in their projects, but all of
those things can be delegated to others.
--
Thierry Carrez (ttx)

From gmann at ghanshyammann.com  Mon Jul 30 11:39:05 2018
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Mon, 30 Jul 2018 20:39:05 +0900
Subject: Re: [openstack-dev] [nova] keypair quota usage info for user
In-Reply-To: 
References: <164d0d39b12.cc6ff7f3115505.8532449585113306574@ghanshyammann.com>
Message-ID: <164eafc1346.12090597756078.8498277622868907579@ghanshyammann.com>

 ---- On Sat, 28 Jul 2018 04:21:53 +0900 Matt Riedemann wrote ----
 > On 7/27/2018 2:14 PM, Matt Riedemann wrote:
 > >> From checking the history and review discussion on [3], it seems that
 > >> it was like that from the start. key_pair quota is being counted when
 > >> actually creating the keypair but it is not shown in API 'in_use' field.
 > >
 > > Just so I'm clear which API we're talking about, you mean there is no
 > > totalKeypairsUsed entry in
 > > https://developer.openstack.org/api-ref/compute/#show-rate-and-absolute-limits
 > > correct?
 >
 > Nevermind I see it now:
 >
 > https://developer.openstack.org/api-ref/compute/#show-the-detail-of-quota

Yeah, the 'in_use' field under 'keypair' of this API.

 > We have too many quota-related APIs.
 >
 > --
 > Thanks,
 > Matt
 >
 > __________________________________________________________________________
 > OpenStack Development Mailing List (not for usage questions)
 > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
 > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 >

From yamamoto at midokura.com  Mon Jul 30 12:32:48 2018
From: yamamoto at midokura.com (Takashi Yamamoto)
Date: Mon, 30 Jul 2018 21:32:48 +0900
Subject: [openstack-dev] [neutron] Bug deputy report
Message-ID: 

hi,

Here's a report of my week. I will not attend the neutron meeting due to an
overlapping schedule. (sorry!)

Issues I failed to triage. Needs help from someone familiar with DVR.
https://bugs.launchpad.net/neutron/+bug/1783470 get_subnet_for_dvr returns SNAT mac instead of gateway in subnet_info https://bugs.launchpad.net/neutron/+bug/1783654 DVR process flow not installed on physical bridge for shared tenant network Critical none High https://bugs.launchpad.net/neutron/+bug/1783306 Invalid auth url Medium https://bugs.launchpad.net/neutron/+bug/1782421 https://bugs.launchpad.net/neutron/+bug/1782421 https://bugs.launchpad.net/neutron/+bug/1780453 openvswitch-agent doesn't try to rebind "binding_failed" ports on startup anymore https://bugs.launchpad.net/neutron/+bug/1783908 dnsmasq does not remove leases for deleted VMs - leases and host files point to different MACS https://bugs.launchpad.net/neutron/+bug/1783965 Openvswtich agent break the existing data plane as not stable server https://bugs.launchpad.net/neutron/+bug/1783968 ovs agent failed to continue to process devices if one of them are failed https://bugs.launchpad.net/neutron/+bug/1783378 Following protocol 73 name change , neutron constants have to be updated too https://bugs.launchpad.net/neutron/+bug/1780883 FWAAS V1: Add or remove firewall rules, caused the status of associated firewall becomes "PENDING_UPDATE" https://bugs.launchpad.net/neutron/+bug/1784006 Instances miss neutron QoS on their ports after unrescue and soft reboot Incomplete https://bugs.launchpad.net/neutron/+bug/1781372 Neutron security group resource logging presents in ovs-agent.log https://bugs.launchpad.net/neutron/+bug/1783261 Neutron-LBaaS v2: create loadbalance of 5 listeners, and add members to each pool, cost about 1 hour https://bugs.launchpad.net/neutron/+bug/1779334 neutron-vpnaas doesn't support local tox targets https://bugs.launchpad.net/neutron/+bug/1779194 neutron-lbaas haproxy agent, when configured with allow_automatic_lbaas_agent_failover = True, after failover, when the failed agent restarts or reconnects to RabbitMQ, it tries to unplug the vif port without checking if it is used by other agent https://bugs.launchpad.net/neutron/+bug/1778735 floatingip not found 404 PecanNotFound https://bugs.launchpad.net/neutron/+bug/1783330 Logging - Error message is not correct in creating network log with incorrect 'resource_type' https://bugs.launchpad.net/neutron/+bug/1780407 There are some errors in neutron_l3_agent and neutron_dhcp_agent after restarting open vswitch with dpdk https://bugs.launchpad.net/neutron/+bug/1784342 AttributeError: 'Subnet' object has no attribute '_obj_network_id' Duplicate https://bugs.launchpad.net/neutron/+bug/1783534 Install and configure OpenStack Neutron for Ubuntu For those who read this boring mail up to here: https://review.openstack.org/#/c/586488/ Bug deputy routines for dummies From juliaashleykreger at gmail.com Mon Jul 30 13:10:48 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Mon, 30 Jul 2018 09:10:48 -0400 Subject: [openstack-dev] [ironic][pt][election] Announcing candidacy for Ironic PTL Message-ID: Greetings! I have been truly amazed by our accomplishments of the last six months and I wish to continue this momentum. As such I am announcing my candidacy and self nomination for the position of Ironic PTL. I promise to continue the application of irony. This past cycle has been very eye opening for me and has taught me a lot about the community at large and the challenges they face. My passion has not wavered and I wish to continue enhancing ironic's capabilities. 
Operators are central to our community, and we need to continue enhancements
that help operators, but at the same time we need to revisit our old ideas
and plans. In a sense we have already started to do this and we need to
continue it. Efforts such as splitting iPXE out of PXE make lots of sense
and better enable mixed-hardware and even mixed-architecture fleets to
co-exist.

My vision for this next cycle is to set up ironic to become the de facto
API-driven hardware provisioning toolkit. This will naturally mean some more
work, and we will need to continue with our momentum and focus on enablement
and performance enhancements to improve the user experience.

Thank you for your consideration.

Julia Kreger (TheJulia)

From balazs.gibizer at ericsson.com  Mon Jul 30 13:23:02 2018
From: balazs.gibizer at ericsson.com (=?iso-8859-1?q?Bal=E1zs?= Gibizer)
Date: Mon, 30 Jul 2018 15:23:02 +0200
Subject: [openstack-dev] [nova]Notification update week 31
Message-ID: <1532956982.28884.1@smtp.office365.com>

Hi,

Here is the latest notification subteam update.

Bugs
----
No RC potential notification bug is tracked. No new bug since last week.

Features
--------
We hit FeatureFreeze. Every tracked bp was merged before FF except versioned
notification transformation. That will be re-proposed to Stein to finish up
the remaining 7 work items that are left on the board
http://burndown.peermore.com/nova-notification/

Weekly meeting
--------------
The next meeting is planned to be held on the 31st of July on
#openstack-meeting-4
https://www.timeanddate.com/worldclock/fixedtime.html?iso=20180731T170000

Cheers,
gibi

From mnaser at vexxhost.com  Mon Jul 30 13:32:23 2018
From: mnaser at vexxhost.com (Mohammed Naser)
Date: Mon, 30 Jul 2018 09:32:23 -0400
Subject: [openstack-dev] [election] [openstack-ansible] Candidacy for OpenStack Ansible
Message-ID: 

Hi everyone:

I would like to submit my candidacy to become PTL for the OpenStack Ansible
project for the upcoming Stein cycle.

I have been personally involved in the deployment of OpenStack for many
years now, using all sorts of different deployment tools. Ansible seems like
a great choice for deploying OpenStack and I've been using OpenStack Ansible
for quite a while now.

As PTL, I hope that I can work with the team to focus on the following:

# CI
- Improve stability of CI for both roles and the integrated repo by using
  more mirrors.
- Start leveraging the integrated repo playbooks inside the role test jobs
  in order to avoid the duplication and test the OpenStack Ansible path.
- Once jobs are stable, add integrated jobs to all roles in order to be sure
  that we don't break the integrated repo with role changes.

# Deployment
- Continue to work on and finalize the addition of distro installation for
  all distributions.
- Aim to start integrating the `systemd` roles and look into the possibility
  of enabling nspawn and avoiding lxc on CentOS.

There's much more to be done, but those are some of the aspects that would
help the stability of this project, which is what I feel we need to focus a
bit more on. As a deployment project with a limited scope of operating
systems needing to be supported, there doesn't seem to be much new we can
come up with, and taking a cycle just to catch up on all the debt, improve
stability and make the maintenance of the project easier is extremely
useful.

I hope to work with the team for the upcoming cycle.

Regards,
Mohammed

--
Mohammed Naser — vexxhost
-----------------------------------------------------
D. 514-316-8872
D. 800-910-1726 ext. 200
E.
mnaser at vexxhost.com
W. http://vexxhost.com

From mordred at inaugust.com  Mon Jul 30 13:33:32 2018
From: mordred at inaugust.com (Monty Taylor)
Date: Mon, 30 Jul 2018 08:33:32 -0500
Subject: [openstack-dev] [requirements][release] FFE for openstacksdk 0.17.1
Message-ID: <7b338469-07f1-bb93-6dcf-dc32e5a63da7@inaugust.com>

Heya,

I'd like to request an FFE to release 0.17.1 of openstacksdk from
stable/rocky.

The current rocky release, 0.17.0, added a feature (being able to pass data
directly to an object upload rather than requiring a file or file-like
object) - but it is broken if you pass an iterator because it (senselessly)
tries to run len() on the data parameter.

The new feature is not used anywhere in OpenStack yet. The first consumer
(and requestor of the feature) is Infra, who are looking at using it as part
of our efforts to start uploading build log files to swift.

We should not need a g-r bump - since nothing in OpenStack uses the feature
yet, none of the OpenStack projects need their depends changed. OTOH,
openstacksdk is a thing we expect end-users to use, and once they see the
shiny new feature they might use it - and then be sad that it's half broken.

Thanks!
Monty

From mnaser at vexxhost.com  Mon Jul 30 13:34:29 2018
From: mnaser at vexxhost.com (Mohammed Naser)
Date: Mon, 30 Jul 2018 09:34:29 -0400
Subject: [openstack-dev] [puppet] non-candidacy for stein
Message-ID: 

Hi everyone,

Unfortunately, I've gotten busy with a few other projects over time and I
won't be able to run for PTL for the upcoming Stein cycle.

I'd like to personally thank all of the current Puppet team for their help
in on-boarding and helping me take on one of my first leadership experiences
inside OpenStack; I'm extremely grateful for all of it.

I'll continue to be able to do reviews however!

Thank you,
Mohammed

From mrhillsman at gmail.com  Mon Jul 30 13:44:31 2018
From: mrhillsman at gmail.com (Melvin Hillsman)
Date: Mon, 30 Jul 2018 08:44:31 -0500
Subject: [openstack-dev] Reminder: User Committee @ 1800 UTC
Message-ID: 

Hi everyone,

UC meeting today in #openstack-uc

Agenda: https://wiki.openstack.org/wiki/Governance/Foundation/UserCommittee

--
Kind regards,

Melvin Hillsman
mrhillsman at gmail.com
mobile: (832) 264-2646
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From tenobreg at redhat.com  Mon Jul 30 13:58:19 2018
From: tenobreg at redhat.com (Telles Nobrega)
Date: Mon, 30 Jul 2018 10:58:19 -0300
Subject: [openstack-dev] [Sahara][PTL][Elections] Candidacy for Sahara PTL
Message-ID: 

Hi Saharans,

I would like to nominate myself to act as PTL for Sahara during the Stein
cycle.

I've been acting as PTL for the last three cycles (Pike, Queens and Rocky)
and I believe that we had good results and the project improved well during
this time.

Moving forward I plan to continue working in the direction of stabilization
of the project and improvements to the user experience.

* Bug triaging:
  We need to clean up our bug list. This has been a goal in all recent
  cycles and we need to continue this work.

* Documentation:
  Improvements to documentation are always needed; we had some new features
  introduced and we have to make sure that we keep documentation up to date
  and user friendly.

* Final APIv2 work
  Sadly we didn't finish this in Rocky and for sure we will have to finish
  it in Stein.

* Modularity
  One of the main features planned for Stein is the split of the plugins
  from the Sahara code. This will ease the installation and maintenance of
  Sahara for deployers.
I hope that I can continue leading the team and help improve Sahara as much as we can. -- TELLES NOBREGA SOFTWARE ENGINEER Red Hat Brasil Av. Brg. Faria Lima, 3900 - 8º andar - Itaim Bibi, São Paulo tenobreg at redhat.com TRIED. TESTED. TRUSTED. Red Hat é reconhecida entre as melhores empresas para trabalhar no Brasil pelo Great Place to Work. -------------- next part -------------- An HTML attachment was scrubbed... URL: From d.krol at samsung.com Mon Jul 30 14:07:39 2018 From: d.krol at samsung.com (Dariusz Krol) Date: Mon, 30 Jul 2018 16:07:39 +0200 Subject: [openstack-dev] [trove] Considering the transfter of the project leadership In-Reply-To: References: Message-ID: <20180730140741eucas1p2d9c133702b5c0c0fd2d96c5c53f71afa~GKrCj3OAf1572015720eucas1p2g@eucas1p2.samsung.com> Hello Zhao Chao, after some internal discussion, I will do the nomination if you decided not to nominate yourself. Thanks for letting know you will be still available in the next release cycle. Regarding commits I would recommend to consider also https://review.openstack.org/#/c/586528/2 . Best, Dariusz Krol On 07/30/2018 03:26 AM, 赵超 wrote: > > Since the new folks are still so new - if this works for you - I would > recommend continuing on as the official PTL for one more release, > but with the > understanding that you would just be around to answer questions > and give advice > to help the new team get up to speed. That should hopefully be a > small time > commitment for you while still easing that transition. > > Then hopefully by the T release it would not be an issue at all > for someone > else to step up as the new PTL. Or even if things progress well, > you could step > down as PTL at some point during the Stein cycle if someone is > ready to take > over for you. > > > Sean, thanks a lot for these helpful suggestions.  I thought about > doing it this way before writing this post, and this is also the > reason I asked the current active team members to nominate theselves. > > However, it's sad that the other active team members seems also busy > on other thing. So I think it may be better Dariusz and his team could > do more than us on the project in the next cycle. I believe they're > experience on the project , and all other experiences about the whole > OpenStack environment could be more familiar in the daily > pariticipation of the project. > > On the other hand, I can also understand the lack of time to be a > PTL since it requires probably a lot of time to coordinate all the > work. > > > Dariusz, no, the current team is really a small team, so in fact I > didn't need to do much coordination. The pain is that almost none of > the current active team member are not focusing Trove, so even thought > all of us want to do more progress in this cycle, we're not able to. > This also the reason all of us think it's great to have to team > focusing on the project could join. > > So, we don't have much time on the PTL election now, Dariusz, would > you please discuss with your team who will do the nomination. And then > we'll see if everything could work. We could also try to merge one the > trove-tempest-plugin patches(https://review.openstack.org/#/c/580763/ > could be merged first before we get the CI could test all the cases in > the repo, sadlly currently we cannot the other patches as they're > cannot be tested). > > However that patch is submitted by Krzysztof, though is authored by > Dariusz. I don't know whether this could count as an identifiied > commit when applying PTL nomination. 
> > And last, I want to repeat that, I'll still in the Trove delepoment > for quit a long time, so I will help the new PTL and new contributor > on everything I could. > > Thanks again for everyone who help me a lot in the last cycle, > especially Fan Zhang, zhanggang, wangyao, song.jian and Manoj Kumar. > > -- > To be free as in freedom. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 13168 bytes Desc: not available URL: From fungi at yuggoth.org Mon Jul 30 14:17:41 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 30 Jul 2018 14:17:41 +0000 Subject: [openstack-dev] [all][Election] Last days for PTL nomination In-Reply-To: References: <20180730013519.GA4829@thor.bakeyournoodle.com> Message-ID: <20180730141741.ri4ugsvvvq2csz2x@yuggoth.org> On 2018-07-30 15:23:57 +0700 (+0700), Luke Hinds wrote: > Security is a SIG and no longer a project (changed as of rocky cycle). Technically it's still both at the moment, which is why I proposed https://review.openstack.org/586896 yesterday (tried to give you a heads up in IRC about that as well). A +1 from the current PTL of record on that change would probably be a good idea. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From Helen.Walsh at dell.com Mon Jul 30 14:18:43 2018 From: Helen.Walsh at dell.com (Walsh, Helen) Date: Mon, 30 Jul 2018 14:18:43 +0000 Subject: [openstack-dev] [cinder][nova] - Barbican w/Live Migration in DevStack Multinode Message-ID: <6031C821D2144A4CB722005A21B34BD53AD3323D@MX202CL02.corp.emc.com> Hi OpenStack Community, I am having some issues with key management in a multinode devstack (from master branch 27th July '18) environment where Barbican is the configured key_manager. 
I have followed setup instructions from the following pages: * https://docs.openstack.org/barbican/latest/contributor/devstack.html (manual configuration) * https://docs.openstack.org/cinder/latest/configuration/block-storage/volume-encryption.html So far: * Unencrypted block volumes can be attached to instances on any compute node * Instances with unencrypted volumes can also be live migrated to other compute node * Encrypted bootable volumes created successfully * Instances can be launched using these encrypted volumes when the instance is spawned on demo_machine1 (controller & compute node) * Instances cannot be launched using encrypted volumes when the instance is spawned on demo_machine2 or demo_machine3 (compute only), the same failure can be seen in nova logs from both compute nodes: Jul 30 14:35:18 demo_machine2 nova-compute[25686]: DEBUG cinderclient.v3.client [None req-3c977faa-a64c-4536-82c8-d1dbaf856b99 admin admin] GET call to cinderv3 for http://10.0.0.63/volume/v3/3f22a0262a7b4832a08c24ac0295cbd9/volumes/296148bf-edb8-4c9f-88c2-44464907f7e7/encryption used request id req-71fa7f20-c0bc-46c3-9f07-5866344d31a1 {{(pid=25686) request /usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py:844}} Jul 30 14:35:18 demo_machine2 nova-compute[25686]: DEBUG os_brick.encryptors [None req-3c977faa-a64c-4536-82c8-d1dbaf856b99 admin admin] Using volume encryption metadata '{u'cipher': u'aes-xts-plain64', u'encryption_key_id': u'da7ee21c-67ff-4d74-95a0-18ee6c25d85a', u'provider': u'luks', u'key_size': 256, u'control_location': u'front-end'}' for connection: {'status': u'attaching', 'detached_at': u'', u'volume_id': u'296148bf-edb8-4c9f-88c2-44464907f7e7', 'attach_mode': u'null', 'driver_volume_type': u'iscsi', 'instance': u'e0dc6eac-09bb-4232-bea7-7b8b161cfa31', 'attached_at': u'2018-07-30T13:35:17.000000', 'serial': u'296148bf-edb8-4c9f-88c2-44464907f7e7', 'data': {'device_path': '/dev/disk/by-id/scsi-SEMC_SYMMETRIX_900049_wy000', u'target_discovered': True, u'encrypted': True, u'qos_specs': None, u'target_iqn': u'iqn.1992-04.com.emc:600009700bcbb7112504018f00000000', u'target_portal': u'192.168.0.60:3260', u'volume_id': u'296148bf-edb8-4c9f-88c2-44464907f7e7', u'target_lun': 1, u'access_mode': u'rw'}} {{(pid=25686) get_encryption_metadata /usr/local/lib/python2.7/dist-packages/os_brick/encryptors/__init__.py:125}} Jul 30 14:35:18 demo_machine2 nova-compute[25686]: WARNING keystoneauth.identity.generic.base [None req-3c977faa-a64c-4536-82c8-d1dbaf856b99 admin admin] Failed to discover available identity versions when contacting http://localhost/identity/v3. Attempting to parse version from URL.: NotFound: Not Found (HTTP 404) Jul 30 14:35:18 demo_machine2 nova-compute[25686]: ERROR castellan.key_manager.barbican_key_manager [None req-3c977faa-a64c-4536-82c8-d1dbaf856b99 admin admin] Error creating Barbican client: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. Not Found (HTTP 404): DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. 
Not Found (HTTP 404) All instance of Nova have [key_manager] configured as follows: [key_manager] backend = barbican auth_url = http://10.0.0.63/identity/ ### Tried with and without the below config options, same result # auth_type = password # password = devstack # username = barbican Any assistance here would be greatly appreciated, I have spent a lot of time looking for some additional information for the use of Barbican in multinode devstack environments or with live migration but there is nothing out there, everything is for all-in-one environments and I'm not having any issues when everything is on one node. I am wondering if at this point there is something I am missing in terms of services in a multinode devstack environment, qualification of barbican in a multinode environment is outside of the recommended test config but following the docs it looks very straight forward. Some information on the three nodes in my environment are below, if there is any other information I can provide let me know, thanks for the help! Node & Service Breakdown Node 1 (Controller & Compute) stack at demo_machine1:~$ openstack service list +----------------------------------+-------------+----------------+ | ID | Name | Type | +----------------------------------+-------------+----------------+ | 43a1334c755c4c81969565097cc9c30c | cinder | volume | | 52a8927c09154e33900f24c7c95a9f8b | cinderv2 | volumev2 | | 5427a9dff3b6477197062e1747843c4d | nova_legacy | compute_legacy | | 5b319b6d50634661998fdd8dc70a85e3 | nova | compute | | 5ffbb2e9f7c84c9e9601ab7aba0cf5e1 | placement | placement | | 787fd29afe2f41b0bb44f9c301fd22c5 | cinderv3 | volumev3 | | 96813e167b8842aba9d8b94fad67904f | neutron | network | | 993e615a03cc49e3be94840c0b82636b | swift | object-store | | b3834468ffc44f30b792459611f5f4e9 | cinder | block-storage | | cab9ff9e175f4566a1865ea35a377d0d | barbican | key-manager | | d12f710b815442fb970c22087b6e8f4f | glance | image | | eb80de21e42b4e978985db979b175f79 | keystone | identity | +----------------------------------+-------------+----------------+ stack at demo_machine1:~$ openstack endpoint list +----------------------------------+-----------+--------------+----------------+---------+-----------+-------------------------------------------------+ | ID | Region | Service Name | Service Type | Enabled | Interface | URL | +----------------------------------+-----------+--------------+----------------+---------+-----------+-------------------------------------------------+ | 00b276609956454d8d80dd0dde0df231 | RegionOne | cinder | volume | True | public | http://10.0.0.63/volume/v1/$(project_id)s | | 18e5d431143d47ed980ee0ffbf0d03d7 | RegionOne | barbican | key-manager | True | public | http://10.0.0.63/key-manager | | 20cfe0a80cc94b6eb8ea8e6784839198 | RegionOne | barbican | key-manager | True | internal | http://10.0.0.63/key-manager | | 3a740b472e7349f19d0cf110c1792122 | RegionOne | cinderv3 | volumev3 | True | public | http://10.0.0.63/volume/v3/$(project_id)s | | 4d957921fe894abba296331869f82f7f | RegionOne | cinderv2 | volumev2 | True | public | http://10.0.0.63/volume/v2/$(project_id)s | | 4df258794fde476ab82502c682848e58 | RegionOne | swift | object-store | True | admin | http://10.0.0.63:8080 | | 719eabec7cb94580af9f928278589878 | RegionOne | keystone | identity | True | public | http://10.0.0.63/identity | | 792f4c99085f4b008643b08aff463759 | RegionOne | keystone | identity | True | admin | http://10.0.0.63/identity | | 9e8c27c6e22f4a70865bfcdd815ed3c0 | RegionOne | cinder | block-storage 
| True | public | http://10.0.0.63/volume/v3/$(project_id)s | | a271f19f29d443a0b5545626584389d7 | RegionOne | glance | image | True | public | http://10.0.0.63/image | | a975403a2ff149bb88ce2d2227d17a80 | RegionOne | nova | compute | True | public | http://10.0.0.63/compute/v2.1 | | b65b46e83b4547588eb694d63cb5cdd5 | RegionOne | swift | object-store | True | public | http://10.0.0.63:8080/v1/AUTH_$(project_id)s | | bfd1f91ba18b4bc0bc83586ee358a73c | RegionOne | placement | placement | True | public | http://10.0.0.63/placement | | d38a11dcfe824fe28f70b45422277d26 | RegionOne | nova_legacy | compute_legacy | True | public | http://10.0.0.63/compute/v2/$(project_id)s | | ea9139e670e84ff39d1c052347a04695 | RegionOne | neutron | network | True | public | http://10.0.0.63:9696/ | +----------------------------------+-----------+--------------+----------------+---------+-----------+-------------------------------------------------+ stack at demo_machine1:~$ openstack secret store +---------------+---------------------------------------------------------------------------------+ | Field | Value | +---------------+---------------------------------------------------------------------------------+ | Secret href | http://10.0.0.63/key-manager/v1/secrets/72a3955b-a494-4352-b1f6-ae3f322e5656 | | Name | None | | Created | 2018-07-30T12:58:33+00:00 | | Status | ACTIVE | | Content types | None | | Algorithm | aes | | Bit length | 256 | | Secret type | opaque | | Mode | cbc | | Expiration | None | +---------------+---------------------------------------------------------------------------------+ Node 2 & 3 (Compute Only) Services: stack at demo_machine2:~$ sudo systemctl list-unit-files | grep devstack@* devstack at n-api-meta.service enabled devstack at n-cpu.service enabled devstack at q-agt.service enabled stack at demo_machine3:~$ sudo systemctl list-unit-files | grep devstack@* devstack at n-api-meta.service enabled devstack at n-cpu.service enabled devstack at q-agt.service enabled ******************************************************************** Michael McAleer Software Engineer 1, Core Technologies Dell EMC | Enterprise Storage Division Phone: +353 21 428 1729 Michael.Mcaleer at Dell.com Ireland COE, Ovens, Co. Cork, Ireland -------------- next part -------------- An HTML attachment was scrubbed... URL: From mmagr at redhat.com Mon Jul 30 14:32:30 2018 From: mmagr at redhat.com (Martin Magr) Date: Mon, 30 Jul 2018 16:32:30 +0200 Subject: [openstack-dev] [tripleo] The Weekly Owl - 25th Edition In-Reply-To: References: Message-ID: On Tue, Jul 17, 2018 at 6:12 PM, Emilien Macchi wrote: > Your fellow reporter took a break from writing, but is now back on his pen. > > Welcome to the twenty-fifth edition of a weekly update in TripleO world! > The goal is to provide a short reading (less than 5 minutes) to learn > what's new this week. > Any contributions and feedback are welcome. > Link to the previous version: > http://lists.openstack.org/pipermail/openstack-dev/2018-June/131426.html > > +---------------------------------+ > | General announcements | > +---------------------------------+ > > +--> Rocky Milestone 3 is next week. After, any feature code will require > Feature Freeze Exception (FFE), asked on the mailing-list. We'll enter a > bug-fix only and stabilization period, until we can push the first stable > version of Rocky. 
> Hey guys, I would like to ask for an FFE for backup and restore; we have only now settled on the best place for this project's code base (please see [1] for details). We believe that B&R support for the overcloud control plane will be a good addition to the Rocky release, but we admittedly started this initiative quite late. The end result should be support in the openstack client, where "openstack overcloud (backup|restore)" would work like a charm. Thanks in advance for considering this feature. Regards, Martin [1] https://review.openstack.org/#/c/582453/ > +--> Next PTG will be in Denver, please propose topics: > https://etherpad.openstack.org/p/tripleoci-ptg-stein > +--> Multiple squads are currently brainstorming a framework to provide > validations pre/post upgrades - stay in touch! > > +------------------------------+ > | Continuous Integration | > +------------------------------+ > > +--> Sprint theme: migration to Zuul v3 (More on > https://trello.com/c/vyWXcKOB/841-sprint-16-goals) > +--> Sagi is the rover and Chandan is the ruck. Please tell them any CI > issue. > +--> Promotion on master is 4 days, 0 days on Queens and Pike and 1 day on > Ocata.
It will sometimes make its home in > the giant saguaro cactus, nesting in holes made by other animals. However, > the elf owl isn’t picky and will also live in trees or on telephone poles. > > Source: http://mentalfloss.com/article/68473/15-mysterious-facts-abo > ut-owls > > Thank you all for reading and stay tuned! > -- > Your fellow reporter, Emilien Macchi > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aschultz at redhat.com Mon Jul 30 14:41:29 2018 From: aschultz at redhat.com (Alex Schultz) Date: Mon, 30 Jul 2018 08:41:29 -0600 Subject: [openstack-dev] [tripleo] The Weekly Owl - 25th Edition In-Reply-To: References: Message-ID: On Mon, Jul 30, 2018 at 8:32 AM, Martin Magr wrote: > > > On Tue, Jul 17, 2018 at 6:12 PM, Emilien Macchi wrote: >> >> Your fellow reporter took a break from writing, but is now back on his >> pen. >> >> Welcome to the twenty-fifth edition of a weekly update in TripleO world! >> The goal is to provide a short reading (less than 5 minutes) to learn >> what's new this week. >> Any contributions and feedback are welcome. >> Link to the previous version: >> http://lists.openstack.org/pipermail/openstack-dev/2018-June/131426.html >> >> +---------------------------------+ >> | General announcements | >> +---------------------------------+ >> >> +--> Rocky Milestone 3 is next week. After, any feature code will require >> Feature Freeze Exception (FFE), asked on the mailing-list. We'll enter a >> bug-fix only and stabilization period, until we can push the first stable >> version of Rocky. > > > Hey guys, > > I would like to ask for FFE for backup and restore, where we ended up > deciding where is the best place for the code base for this project (please > see [1] for details). We believe that B&R support for overcloud control > plane will be good addition to a rocky release, but we started with this > initiative quite late indeed. The final result should the support in > openstack client, where "openstack overcloud (backup|restore)" would work as > a charm. Thanks in advance for considering this feature. > Was there a blueprint/spec for this effort? Additionally do we have a list of the outstanding work required for this? If it's just these two playbooks, it might be ok for an FFE. But if there's additional tripleoclient related changes, I wouldn't necessarily feel comfortable with these unless we have a complete list of work. Just as a side note, I'm not sure putting these in tripleo-common is going to be the ideal place for this. Thanks, -Alex > Regards, > Martin > > [1] https://review.openstack.org/#/c/582453/ > >> >> +--> Next PTG will be in Denver, please propose topics: >> https://etherpad.openstack.org/p/tripleoci-ptg-stein >> +--> Multiple squads are currently brainstorming a framework to provide >> validations pre/post upgrades - stay in touch! >> >> +------------------------------+ >> | Continuous Integration | >> +------------------------------+ >> >> +--> Sprint theme: migration to Zuul v3 (More on >> https://trello.com/c/vyWXcKOB/841-sprint-16-goals) >> +--> Sagi is the rover and Chandan is the ruck. Please tell them any CI >> issue. >> +--> Promotion on master is 4 days, 0 days on Queens and Pike and 1 day on >> Ocata. 
>> +--> More: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting >> >> +-------------+ >> | Upgrades | >> +-------------+ >> >> +--> Good progress on major upgrades workflow, need reviews! >> +--> More: https://etherpad.openstack.org/p/tripleo-upgrade-squad-status >> >> +---------------+ >> | Containers | >> +---------------+ >> >> +--> We switched python-tripleoclient to deploy containerized undercloud >> by default! >> +--> Image prepare via workflow is still work in progress. >> +--> More: >> https://etherpad.openstack.org/p/tripleo-containers-squad-status >> >> +----------------------+ >> | config-download | >> +----------------------+ >> >> +--> UI integration is almost done (need review) >> +--> Bug with failure listing is being fixed: >> https://bugs.launchpad.net/tripleo/+bug/1779093 >> +--> More: >> https://etherpad.openstack.org/p/tripleo-config-download-squad-status >> >> +--------------+ >> | Integration | >> +--------------+ >> >> +--> We're enabling decoupled deployment plans e.g for OpenShift, DPDK >> etc: >> https://review.openstack.org/#/q/topic:alternate_plans+(status:open+OR+status:merged) >> (need reviews). >> +--> More: >> https://etherpad.openstack.org/p/tripleo-integration-squad-status >> >> +---------+ >> | UI/CLI | >> +---------+ >> >> +--> Good progress on network configuration via UI >> +--> Config-download patches are being reviewed and a lot of testing is >> going on. >> +--> More: https://etherpad.openstack.org/p/tripleo-ui-cli-squad-status >> >> +---------------+ >> | Validations | >> +---------------+ >> >> +--> Working on OpenShift validations, need reviews. >> +--> More: >> https://etherpad.openstack.org/p/tripleo-validations-squad-status >> >> +---------------+ >> | Networking | >> +---------------+ >> >> +--> No updates this week. >> +--> More: >> https://etherpad.openstack.org/p/tripleo-networking-squad-status >> >> +--------------+ >> | Workflows | >> +--------------+ >> >> +--> No updates this week. >> +--> More: https://etherpad.openstack.org/p/tripleo-workflows-squad-status >> >> +-----------+ >> | Security | >> +-----------+ >> >> +--> Working on Secrets management and Limit TripleO users efforts >> +--> More: https://etherpad.openstack.org/p/tripleo-security-squad >> >> +------------+ >> | Owl fact | >> +------------+ >> Elf owls live in a cacti. They are the smallest owls, and live in the >> southwestern United States and Mexico. It will sometimes make its home in >> the giant saguaro cactus, nesting in holes made by other animals. However, >> the elf owl isn’t picky and will also live in trees or on telephone poles. >> >> Source: >> http://mentalfloss.com/article/68473/15-mysterious-facts-about-owls >> >> Thank you all for reading and stay tuned! 
>> -- >> Your fellow reporter, Emilien Macchi >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From david.ames at canonical.com Mon Jul 30 14:42:46 2018 From: david.ames at canonical.com (David Ames) Date: Mon, 30 Jul 2018 07:42:46 -0700 Subject: [openstack-dev] [charms] PTL candidacy for Stein cycle In-Reply-To: References: Message-ID: On Sat, Jul 28, 2018 at 8:25 AM, Frode Nordahl wrote: > Hello all, > > I hereby announce my candidacy for PTL of the OpenStack Charms project [0]. > > Through the course of the past two years I have made many contributions to > the Charms projects and I have had the privilege of becoming a Core > developer. > > Prior to focusing on the Charms project I have made upstream contributions > in > other OpenStack projects and I have followed the unfolding and development > of > the OpenStack community with great interest. > > We live in exciting times and I believe great things are afoot for OpenStack > as a stable, versatile and solid contender in the cloud space. It would be > my privilege to be able to help further that along as PTL for the Charms > project. > > Our project has a strong and disperse group of contributors and we are > blessed > with motivated and assertive people taking interest in maintaining existing > code as well as developing new features. > > The most important aspect of my job as PTL will be to make sure we maintain > room for the diversity of contributions without losing velocity and > direction. > Maintaining and developing our connection with the broader OpenStack > community > will also be of great importance. > > Some key areas of focus for Stein cycle: > - Python 3 migration > - The clock is ticking for Python 2 and we need to continue the drive > towards > porting all our code to Python 3 > - Continue modernization of test framework > - Sustained software quality is only as good as you can prove through the > quality of your unit and functional tests. > - Great progress has been made this past cycle in developing and extending > functionality of a new framework for our functional tests and we need to > continue this work. > - Continue to build test driven development culture, and export this > culture > to contributors outside the core team. > - [Multi-cycle] Explore possibilities and methodologies for Classic -> > layered > Reactive Charm migrations > - A lot of effort has been put into the Reactive Charm framework and the > reality of writing a new Charm today is quite different from what it was > just a few years ago. > - The time and effort needed to maintain a layered Reactive Charm is also > far > less than what it takes to maintain a classic Charm. > - There are many hard and difficult topics surrounding such a migration > but I > think it is worth spending some time exploring our options of how we > could > get there. > - Evaluate use of upstream release tools > - The OpenStack release team has put together some great tools that might > make our release duties easier. 
Let us evaluate adopting some of them > for > our project. > > 0: https://review.openstack.org/#/c/586821/ > > -- > Frode Nordahl (IRC: fnordahl) +1 I am certain Frode will work tirelessly as the Chrams PTL. -- David Ames From smonderer at vasonanetworks.com Mon Jul 30 14:48:36 2018 From: smonderer at vasonanetworks.com (Samuel Monderer) Date: Mon, 30 Jul 2018 17:48:36 +0300 Subject: [openstack-dev] [tripleo] deployement fails Message-ID: Hi, I'm trying to deploy a small environment with one controller and one compute but i get a timeout with no specific information in the logs 2018-07-30 13:19:41Z [overcloud.Controller.0.ControllerConfig]: CREATE_IN_PROGRESS state changed 2018-07-30 13:19:41Z [overcloud.Controller.0.ControllerConfig]: CREATE_COMPLETE state changed 2018-07-30 14:04:51Z [overcloud.ComputeGammaV3]: CREATE_FAILED CREATE aborted (Task create from ResourceGroup "ComputeGammaV3" Stack "overcloud" [690ee33c-8194-4713-a44f-9c8dcf88359f] Timed out) 2018-07-30 14:04:51Z [overcloud.ComputeGammaV3]: UPDATE_FAILED Stack UPDATE cancelled 2018-07-30 14:04:51Z [overcloud]: CREATE_FAILED Timed out 2018-07-30 14:04:51Z [overcloud.ComputeGammaV3.0]: CREATE_FAILED Stack CREATE cancelled 2018-07-30 14:04:51Z [overcloud.Controller]: CREATE_FAILED CREATE aborted (Task create from ResourceGroup "Controller" Stack "overcloud" [690ee33c-8194-4713-a44f-9c8dcf88359f] Timed out) 2018-07-30 14:04:51Z [overcloud]: CREATE_FAILED Timed out 2018-07-30 14:04:51Z [overcloud.Controller]: UPDATE_FAILED Stack UPDATE cancelled 2018-07-30 14:04:51Z [overcloud.Controller.0]: CREATE_FAILED Stack CREATE cancelled 2018-07-30 14:04:52Z [overcloud.ComputeGammaV3.0]: CREATE_FAILED resources[0]: Stack CREATE cancelled Stack overcloud CREATE_FAILED overcloud.ComputeGammaV3.0: resource_type: OS::TripleO::ComputeGammaV3 physical_resource_id: 5755d746-7cbf-4f3d-a9e1-d94a713705a7 status: CREATE_FAILED status_reason: | resources[0]: Stack CREATE cancelled overcloud.Controller.0: resource_type: OS::TripleO::Controller physical_resource_id: 4bcf84c1-1d54-45ee-9f81-b6dda780cbd7 status: CREATE_FAILED status_reason: | resources[0]: Stack CREATE cancelled Not cleaning temporary directory /tmp/tripleoclient-vxGzKo Not cleaning temporary directory /tmp/tripleoclient-vxGzKo Heat Stack create failed. Heat Stack create failed. 
(undercloud) [stack at staging-director ~]$ It seems that it wasn't able to configure the OVS bridges (undercloud) [stack at staging-director ~]$ openstack software deployment show 4b4fc54f-7912-40e2-8ad4-79f6179fe701 +---------------+--------------------------------------------------------+ | Field | Value | +---------------+--------------------------------------------------------+ | id | 4b4fc54f-7912-40e2-8ad4-79f6179fe701 | | server_id | 0accb7a3-4869-4497-8f3b-5a3d99f3926b | | config_id | 2641b4dd-afc7-4bf5-a2e2-481c207e4b7f | | creation_time | 2018-07-30T13:19:44Z | | updated_time | | | status | IN_PROGRESS | | status_reason | Deploy data available | | input_values | {u'interface_name': u'nic1', u'bridge_name': u'br-ex'} | | action | CREATE | +---------------+--------------------------------------------------------+ (undercloud) [stack at staging-director ~]$ openstack software deployment show a297e8ae-f4c9-41b0-938f-c51f9fe23843 +---------------+--------------------------------------------------------+ | Field | Value | +---------------+--------------------------------------------------------+ | id | a297e8ae-f4c9-41b0-938f-c51f9fe23843 | | server_id | 145167da-9b96-4eee-bfe9-399b854c1e84 | | config_id | d1baf0a5-de9b-48f2-b486-9f5d97f7e94f | | creation_time | 2018-07-30T13:17:29Z | | updated_time | | | status | IN_PROGRESS | | status_reason | Deploy data available | | input_values | {u'interface_name': u'nic1', u'bridge_name': u'br-ex'} | | action | CREATE | +---------------+--------------------------------------------------------+ (undercloud) [stack at staging-director ~]$ Regards, Samuel -------------- next part -------------- An HTML attachment was scrubbed... URL: From jimmy at openstack.org Mon Jul 30 15:06:07 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Mon, 30 Jul 2018 10:06:07 -0500 Subject: [openstack-dev] [i18n] Edge and Containers whitepapers ready for translation In-Reply-To: References: <5B4CB64F.4060602@openstack.org> <5B4CB93F.6070202@openstack.org> <5B4DF6B4.9030501@openstack.org> <7f053a98-718b-0470-9ffa-934f0e715a76@gmail.com> <5B4E132E.5050607@openstack.org> <5B50A476.8010606@openstack.org> Message-ID: <5B5F295F.3090608@openstack.org> Frank, We're getting a 404 when looking for the pot file on the Zanata API: https://translate.openstack.org/rest/file/translation/papers/papers/de/po?docId=edge-computing As a result, we can't pull the po files. Any idea what might be happening? Seeing the same thing with both papers... Thank you, Jimmy Frank Kloeker wrote: > Hi Jimmy, > > Korean and German version are now done on the new format. Can you > check publishing? > > thx > > Frank > > Am 2018-07-19 16:47, schrieb Jimmy McArthur: >> Hi all - >> >> Follow up on the Edge paper specifically: >> https://translate.openstack.org/iteration/view/edge-computing/pot-translation/documents?dswid=-3192 >> >> >> This is now available. As I mentioned on IRC this morning, it should >> be VERY close to the PDF. Probably just needs a quick review. >> >> Let me know if I can assist with anything. >> >> Thank you to i18n team for all of your help!!! >> >> Cheers, >> Jimmy >> >> Jimmy McArthur wrote: >>> Ian raises some great points :) I'll try to address below... >>> >>> Ian Y. Choi wrote: >>>> Hello, >>>> >>>> When I saw overall translation source strings on container >>>> whitepaper, I would infer that new edge computing whitepaper >>>> source strings would include HTML markup tags. 
>>> One of the things I discussed with Ian and Frank in Vancouver is the >>> expense of recreating PDFs with new translations. It's >>> prohibitively expensive for the Foundation as it requires design >>> resources which we just don't have. As a result, we created the >>> Containers whitepaper in HTML, so that it could be easily updated >>> w/o working with outside design contractors. I indicated that we >>> would also be moving the Edge paper to HTML so that we could prevent >>> that additional design resource cost. >>>> On the other hand, the source strings of edge computing whitepaper >>>> which I18n team previously translated do not include HTML markup >>>> tags, since the source strings are based on just text format. >>> The version that Akihiro put together was based on the Edge PDF, >>> which we unfortunately didn't have the resources to implement in the >>> same format. >>>> >>>> I really appreciate Akihiro's work on RST-based support on >>>> publishing translated edge computing whitepapers, since >>>> translators do not have to re-translate all the strings. >>> I would like to second this. It took a lot of initiative to work on >>> the RST-based translation. At the moment, it's just not usable for >>> the reasons mentioned above. >>>> On the other hand, it seems that I18n team needs to investigate on >>>> translating similar strings of HTML-based edge computing whitepaper >>>> source strings, which would discourage translators. >>> Can you expand on this? I'm not entirely clear on why the HTML based >>> translation is more difficult. >>>> >>>> That's my point of view on translating edge computing whitepaper. >>>> >>>> For translating container whitepaper, I want to further ask the >>>> followings since *I18n-based tools* >>>> would mean for translators that translators can test and publish >>>> translated whitepapers locally: >>>> >>>> - How to build translated container whitepaper using original >>>> Silverstripe-based repository? >>>> https://docs.openstack.org/i18n/latest/tools.html describes well >>>> how to build translated artifacts for RST-based OpenStack repositories >>>> but I could not find the way how to build translated container >>>> whitepaper with translated resources on Zanata. >>> This is a little tricky. It's possible to set up a local version of >>> the OpenStack website >>> (https://github.com/OpenStackweb/openstack-org/blob/master/installation.md). >>> However, we have to manually ingest the po files as they are >>> completed and then push them out to production, so that wouldn't do >>> much to help with your local build. I'm open to suggestions on how >>> we can make this process easier for the i18n team. >>> >>> Thank you, >>> Jimmy >>>> >>>> >>>> With many thanks, >>>> >>>> /Ian >>>> >>>> Jimmy McArthur wrote on 7/17/2018 11:01 PM: >>>>> Frank, >>>>> >>>>> I'm sorry to hear about the displeasure around the Edge paper. As >>>>> mentioned in a prior thread, the RST format that Akihiro worked >>>>> did not work with the Zanata process that we have been using with >>>>> our CMS. Additionally, the existing EDGE page is a PDF, so we had >>>>> to build a new template to work with the new HTML whitepaper >>>>> layout we created for the Containers paper. I outlined this in the >>>>> thread " [OpenStack-I18n] [Edge-computing] [Openstack-sigs] Edge >>>>> Computing Whitepaper Translation" on 6/25/18 and mentioned we >>>>> would be ready with the template around 7/13. 
>>>>> >>>>> We completed the work on the new whitepaper template and then put >>>>> out the pot files on Zanata so we can get the po language files >>>>> back. If this process is too cumbersome for the translation team, >>>>> I'm open to discussion, but right now our entire translation >>>>> process is based on the official OpenStack Docs translation >>>>> process outlined by the i18n team: >>>>> https://docs.openstack.org/i18n/latest/en_GB/tools.html >>>>> >>>>> Again, I realize Akihiro put in some work on his own proposing the >>>>> new translation type. If the i18n team is moving to this format >>>>> instead, we can work on redoing our process. >>>>> >>>>> Please let me know if I can clarify further. >>>>> >>>>> Thanks, >>>>> Jimmy >>>>> >>>>> Frank Kloeker wrote: >>>>>> Hi Jimmy, >>>>>> >>>>>> permission was added for you and Sebastian. The Container >>>>>> Whitepaper is on the Zanata frontpage now. But we removed Edge >>>>>> Computing whitepaper last week because there is a kind of >>>>>> displeasure in the team since the results of translation are >>>>>> still not published beside Chinese version. It would be nice if >>>>>> we have a commitment from the Foundation that results are >>>>>> published in a specific timeframe. This includes your >>>>>> requirements until the translation should be available. >>>>>> >>>>>> thx Frank >>>>>> >>>>>> Am 2018-07-16 17:26, schrieb Jimmy McArthur: >>>>>>> Sorry, I should have also added... we additionally need >>>>>>> permissions so >>>>>>> that we can add the a new version of the pot file to this project: >>>>>>> https://translate.openstack.org/project/view/edge-computing/versions?dswid=-7835 >>>>>>> Thanks! >>>>>>> Jimmy >>>>>>> >>>>>>> >>>>>>> >>>>>>> Jimmy McArthur wrote: >>>>>>>> Hi all - >>>>>>>> >>>>>>>> We have both of the current whitepapers up and available for >>>>>>>> translation. Can we promote these on the Zanata homepage? >>>>>>>> >>>>>>>> https://translate.openstack.org/project/view/leveraging-containers-openstack?dswid=5684 >>>>>>>> https://translate.openstack.org/iteration/view/edge-computing/master/documents?dswid=5684 >>>>>>>> Thanks all! 
>>>>>>>> Jimmy >>>>>>> >>>>>>> >>>>>>> __________________________________________________________________________ >>>>>>> OpenStack Development Mailing List (not for usage questions) >>>>>>> Unsubscribe: >>>>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>>> >>>>> >>>>> >>>>> __________________________________________________________________________ >>>>> OpenStack Development Mailing List (not for usage questions) >>>>> Unsubscribe: >>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>>> >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From prometheanfire at gentoo.org Mon Jul 30 15:09:23 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Mon, 30 Jul 2018 10:09:23 -0500 Subject: [openstack-dev] [requirements][release] FFE for openstacksdk 0.17.1 In-Reply-To: <7b338469-07f1-bb93-6dcf-dc32e5a63da7@inaugust.com> References: <7b338469-07f1-bb93-6dcf-dc32e5a63da7@inaugust.com> Message-ID: <20180730150923.qlyzxbhaijun3god@gentoo.org> On 18-07-30 08:33:32, Monty Taylor wrote: > Heya, > > I'd like to request a FFE to release 0.17.1 of openstacksdk from > stable/rocky. The current rocky release, 0.17.0, added a feature (being able > to pass data directly to an object upload rather that requiring a file or > file-like object) - but it is broken if you pass an interator because it > (senselessly) tries to run len() on the data parameter. > > The new feature is not used anywhere in OpenStack yet. The first consumer > (and requestor of the feature) is Infra, who are looking at using it as part > of our efforts to start uploading build log files to swift. > > We should not need a g-r bump - since nothing in OpenStack uses the feature > yet, none of the OpenStack projects need their depends changed. OTOH, > openstacksdk is a thing we expect end-users to use, and once they see the > shiny new feature they might use it - and then be sad that it's half broken. > As long as it's only a UC bump you have my ack. -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From amy at demarco.com Mon Jul 30 15:22:58 2018 From: amy at demarco.com (Amy Marrich) Date: Mon, 30 Jul 2018 10:22:58 -0500 Subject: [openstack-dev] [openstack-ansible] Proposing Jonathan Rosser as core reviewer In-Reply-To: <58ff-5b5ec980-31-29cd3a00@223498964> References: <58ff-5b5ec980-31-29cd3a00@223498964> Message-ID: +2 from me! Amy (spotz) On Mon, Jul 30, 2018 at 3:16 AM, jean-philippe at evrard.me < jean-philippe at evrard.me> wrote: > Hello everyone, > > I'd like to propose Jonathan Rosser (jrosser) as core reviewer for > OpenStack-Ansible. 
> The BBC team [1] has been very active recently across the board, but > worked heavily in our ops repo, making sure the experience is complete for > operators. > > I value Jonathan's opinion (I remember the storage backend conversations > for lxc/systemd-nspawn!), and I'd like this positive trend to continue. On > top of it Jonathan has been recently reviewing quite a series of patches, > and is involved into some of our important work: bringing the Bionic > support. > > Best regards, > Jean-Philippe Evrard (evrardjp) > > [1]: http://stackalytics.com/?project_type=openstack& > release=rocky&metric=commits&company=BBC > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From prad at redhat.com Mon Jul 30 15:35:24 2018 From: prad at redhat.com (Pradeep Kilambi) Date: Mon, 30 Jul 2018 11:35:24 -0400 Subject: [openstack-dev] [tripleo] The Weekly Owl - 25th Edition In-Reply-To: References: Message-ID: On Mon, Jul 30, 2018 at 10:42 AM Alex Schultz wrote: > On Mon, Jul 30, 2018 at 8:32 AM, Martin Magr wrote: > > > > > > On Tue, Jul 17, 2018 at 6:12 PM, Emilien Macchi > wrote: > >> > >> Your fellow reporter took a break from writing, but is now back on his > >> pen. > >> > >> Welcome to the twenty-fifth edition of a weekly update in TripleO world! > >> The goal is to provide a short reading (less than 5 minutes) to learn > >> what's new this week. > >> Any contributions and feedback are welcome. > >> Link to the previous version: > >> > http://lists.openstack.org/pipermail/openstack-dev/2018-June/131426.html > >> > >> +---------------------------------+ > >> | General announcements | > >> +---------------------------------+ > >> > >> +--> Rocky Milestone 3 is next week. After, any feature code will > require > >> Feature Freeze Exception (FFE), asked on the mailing-list. We'll enter a > >> bug-fix only and stabilization period, until we can push the first > stable > >> version of Rocky. > > > > > > Hey guys, > > > > I would like to ask for FFE for backup and restore, where we ended up > > deciding where is the best place for the code base for this project > (please > > see [1] for details). We believe that B&R support for overcloud control > > plane will be good addition to a rocky release, but we started with this > > initiative quite late indeed. The final result should the support in > > openstack client, where "openstack overcloud (backup|restore)" would > work as > > a charm. Thanks in advance for considering this feature. > > > > Was there a blueprint/spec for this effort? Additionally do we have a > list of the outstanding work required for this? If it's just these two > playbooks, it might be ok for an FFE. But if there's additional > tripleoclient related changes, I wouldn't necessarily feel comfortable > with these unless we have a complete list of work. Just as a side > note, I'm not sure putting these in tripleo-common is going to be the > ideal place for this. > Thanks Alex. For Rocky, if we can ship the playbooks with relevant docs we should be good. We will integrated with client in Stein release with restore logic included. Regarding putting tripleo-common, we're open to suggestions. I think Dan just submitted the review so we can get some eyes on the playbooks. 
Where do you suggest is better place for these instead? > > Thanks, > -Alex > > > Regards, > > Martin > > > > [1] https://review.openstack.org/#/c/582453/ > > > >> > >> +--> Next PTG will be in Denver, please propose topics: > >> https://etherpad.openstack.org/p/tripleoci-ptg-stein > >> +--> Multiple squads are currently brainstorming a framework to provide > >> validations pre/post upgrades - stay in touch! > >> > >> +------------------------------+ > >> | Continuous Integration | > >> +------------------------------+ > >> > >> +--> Sprint theme: migration to Zuul v3 (More on > >> https://trello.com/c/vyWXcKOB/841-sprint-16-goals) > >> +--> Sagi is the rover and Chandan is the ruck. Please tell them any CI > >> issue. > >> +--> Promotion on master is 4 days, 0 days on Queens and Pike and 1 day > on > >> Ocata. > >> +--> More: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting > >> > >> +-------------+ > >> | Upgrades | > >> +-------------+ > >> > >> +--> Good progress on major upgrades workflow, need reviews! > >> +--> More: > https://etherpad.openstack.org/p/tripleo-upgrade-squad-status > >> > >> +---------------+ > >> | Containers | > >> +---------------+ > >> > >> +--> We switched python-tripleoclient to deploy containerized undercloud > >> by default! > >> +--> Image prepare via workflow is still work in progress. > >> +--> More: > >> https://etherpad.openstack.org/p/tripleo-containers-squad-status > >> > >> +----------------------+ > >> | config-download | > >> +----------------------+ > >> > >> +--> UI integration is almost done (need review) > >> +--> Bug with failure listing is being fixed: > >> https://bugs.launchpad.net/tripleo/+bug/1779093 > >> +--> More: > >> https://etherpad.openstack.org/p/tripleo-config-download-squad-status > >> > >> +--------------+ > >> | Integration | > >> +--------------+ > >> > >> +--> We're enabling decoupled deployment plans e.g for OpenShift, DPDK > >> etc: > >> > https://review.openstack.org/#/q/topic:alternate_plans+(status:open+OR+status:merged) > >> (need reviews). > >> +--> More: > >> https://etherpad.openstack.org/p/tripleo-integration-squad-status > >> > >> +---------+ > >> | UI/CLI | > >> +---------+ > >> > >> +--> Good progress on network configuration via UI > >> +--> Config-download patches are being reviewed and a lot of testing is > >> going on. > >> +--> More: https://etherpad.openstack.org/p/tripleo-ui-cli-squad-status > >> > >> +---------------+ > >> | Validations | > >> +---------------+ > >> > >> +--> Working on OpenShift validations, need reviews. > >> +--> More: > >> https://etherpad.openstack.org/p/tripleo-validations-squad-status > >> > >> +---------------+ > >> | Networking | > >> +---------------+ > >> > >> +--> No updates this week. > >> +--> More: > >> https://etherpad.openstack.org/p/tripleo-networking-squad-status > >> > >> +--------------+ > >> | Workflows | > >> +--------------+ > >> > >> +--> No updates this week. > >> +--> More: > https://etherpad.openstack.org/p/tripleo-workflows-squad-status > >> > >> +-----------+ > >> | Security | > >> +-----------+ > >> > >> +--> Working on Secrets management and Limit TripleO users efforts > >> +--> More: https://etherpad.openstack.org/p/tripleo-security-squad > >> > >> +------------+ > >> | Owl fact | > >> +------------+ > >> Elf owls live in a cacti. They are the smallest owls, and live in the > >> southwestern United States and Mexico. It will sometimes make its home > in > >> the giant saguaro cactus, nesting in holes made by other animals. 
> However, > >> the elf owl isn’t picky and will also live in trees or on telephone > poles. > >> > >> Source: > >> http://mentalfloss.com/article/68473/15-mysterious-facts-about-owls > >> > >> Thank you all for reading and stay tuned! > >> -- > >> Your fellow reporter, Emilien Macchi > >> > >> > __________________________________________________________________________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Cheers, ~ Prad -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jesse.Pretorius at rackspace.co.uk Mon Jul 30 15:59:26 2018 From: Jesse.Pretorius at rackspace.co.uk (Jesse Pretorius) Date: Mon, 30 Jul 2018 15:59:26 +0000 Subject: [openstack-dev] [openstack-ansible] Proposing Jonathan Rosser as core reviewer In-Reply-To: <58ff-5b5ec980-31-29cd3a00@223498964> References: <58ff-5b5ec980-31-29cd3a00@223498964> Message-ID: >On 7/30/18, 9:19 AM, "jean-philippe at evrard.me" wrote: > > I'd like to propose Jonathan Rosser (jrosser) as core reviewer for OpenStack-Ansible. > The BBC team [1] has been very active recently across the board, but worked heavily in our ops repo, making sure the experience is complete for operators. I most certainly welcome this. Jonathan (and his team) are insightful and provide very valuable operator input and they're always ready to help when they can. +2 from me ________________________________ Rackspace Limited is a company registered in England & Wales (company registered number 03897010) whose registered office is at 5 Millington Road, Hyde Park Hayes, Middlesex UB3 4AZ. Rackspace Limited privacy policy can be viewed at www.rackspace.co.uk/legal/privacy-policy - This e-mail message may contain confidential or privileged information intended for the recipient. Any dissemination, distribution or copying of the enclosed material is prohibited. If you receive this transmission in error, please notify us immediately by e-mail at abuse at rackspace.com and delete the original message. Your cooperation is appreciated. From openstack at nemebean.com Mon Jul 30 16:06:10 2018 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 30 Jul 2018 11:06:10 -0500 Subject: [openstack-dev] [oslo] PTL candidacy Message-ID: You can find my statement at https://review.openstack.org/#/c/587096/1/candidates/stein/Oslo/openstack%2540nemebean.com That's certainly not an exhaustive list of what I plan to do next cycle, but given the size of our team I thought my time was better spent doing those things than writing a flowery campaign speech that nobody would ever read. 
;-) -Ben From mnaser at vexxhost.com Mon Jul 30 16:08:31 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 30 Jul 2018 12:08:31 -0400 Subject: [openstack-dev] [openstack-ansible] Proposing Jonathan Rosser as core reviewer In-Reply-To: References: <58ff-5b5ec980-31-29cd3a00@223498964> Message-ID: On Mon, Jul 30, 2018 at 11:59 AM, Jesse Pretorius wrote: >>On 7/30/18, 9:19 AM, "jean-philippe at evrard.me" wrote: >> >> I'd like to propose Jonathan Rosser (jrosser) as core reviewer for OpenStack-Ansible. >> The BBC team [1] has been very active recently across the board, but worked heavily in our ops repo, making sure the experience is complete for operators. > > I most certainly welcome this. Jonathan (and his team) are insightful and provide very valuable operator input and they're always ready to help when they can. +2 from me > > I echo those thoughts, +2. :) > > > ________________________________ > Rackspace Limited is a company registered in England & Wales (company registered number 03897010) whose registered office is at 5 Millington Road, Hyde Park Hayes, Middlesex UB3 4AZ. Rackspace Limited privacy policy can be viewed at www.rackspace.co.uk/legal/privacy-policy - This e-mail message may contain confidential or privileged information intended for the recipient. Any dissemination, distribution or copying of the enclosed material is prohibited. If you receive this transmission in error, please notify us immediately by e-mail at abuse at rackspace.com and delete the original message. Your cooperation is appreciated. > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From guilhermesteinmuller at gmail.com Mon Jul 30 16:21:32 2018 From: guilhermesteinmuller at gmail.com (=?UTF-8?Q?Guilherme_Steinm=C3=BCller?=) Date: Mon, 30 Jul 2018 13:21:32 -0300 Subject: [openstack-dev] [openstack-ansible] Proposing Jonathan Rosser as core reviewer In-Reply-To: References: <58ff-5b5ec980-31-29cd3a00@223498964> Message-ID: +1 nice guy to be a core! On Mon, Jul 30, 2018, 13:08 Mohammed Naser wrote: > On Mon, Jul 30, 2018 at 11:59 AM, Jesse Pretorius > wrote: > >>On 7/30/18, 9:19 AM, "jean-philippe at evrard.me" > wrote: > >> > >> I'd like to propose Jonathan Rosser (jrosser) as core reviewer for > OpenStack-Ansible. > >> The BBC team [1] has been very active recently across the board, but > worked heavily in our ops repo, making sure the experience is complete for > operators. > > > > I most certainly welcome this. Jonathan (and his team) are insightful > and provide very valuable operator input and they're always ready to help > when they can. +2 from me > > > > > I echo those thoughts, +2. :) > > > > > > ________________________________ > > Rackspace Limited is a company registered in England & Wales (company > registered number 03897010) whose registered office is at 5 Millington > Road, Hyde Park Hayes, Middlesex UB3 4AZ. Rackspace Limited privacy policy > can be viewed at www.rackspace.co.uk/legal/privacy-policy - This e-mail > message may contain confidential or privileged information intended for the > recipient. 
Any dissemination, distribution or copying of the enclosed > material is prohibited. If you receive this transmission in error, please > notify us immediately by e-mail at abuse at rackspace.com and delete the > original message. Your cooperation is appreciated. > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > -- > Mohammed Naser — vexxhost > ----------------------------------------------------- > D. 514-316-8872 > D. 800-910-1726 ext. 200 > E. mnaser at vexxhost.com > W. http://vexxhost.com > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From remo at rm.ht Mon Jul 30 16:41:14 2018 From: remo at rm.ht (Remo Mattei) Date: Mon, 30 Jul 2018 09:41:14 -0700 Subject: [openstack-dev] [tripleo] deployement fails In-Reply-To: References: Message-ID: <2896B456-87F1-4F54-A4E7-BD06F2CCECF2@rm.ht> Do you have a timeout set? > On Jul 30, 2018, at 07:48, Samuel Monderer wrote: > > Hi, > > I'm trying to deploy a small environment with one controller and one compute but i get a timeout with no specific information in the logs > > 2018-07-30 13:19:41Z [overcloud.Controller.0.ControllerConfig]: CREATE_IN_PROGRESS state changed > 2018-07-30 13:19:41Z [overcloud.Controller.0.ControllerConfig]: CREATE_COMPLETE state changed > 2018-07-30 14:04:51Z [overcloud.ComputeGammaV3]: CREATE_FAILED CREATE aborted (Task create from ResourceGroup "ComputeGammaV3" Stack "overcloud" [690ee33c-8194-4713-a44f-9c8dcf88359f] Timed out) > 2018-07-30 14:04:51Z [overcloud.ComputeGammaV3]: UPDATE_FAILED Stack UPDATE cancelled > 2018-07-30 14:04:51Z [overcloud]: CREATE_FAILED Timed out > 2018-07-30 14:04:51Z [overcloud.ComputeGammaV3.0]: CREATE_FAILED Stack CREATE cancelled > 2018-07-30 14:04:51Z [overcloud.Controller]: CREATE_FAILED CREATE aborted (Task create from ResourceGroup "Controller" Stack "overcloud" [690ee33c-8194-4713-a44f-9c8dcf88359f] Timed out) > 2018-07-30 14:04:51Z [overcloud]: CREATE_FAILED Timed out > 2018-07-30 14:04:51Z [overcloud.Controller]: UPDATE_FAILED Stack UPDATE cancelled > 2018-07-30 14:04:51Z [overcloud.Controller.0]: CREATE_FAILED Stack CREATE cancelled > 2018-07-30 14:04:52Z [overcloud.ComputeGammaV3.0]: CREATE_FAILED resources[0]: Stack CREATE cancelled > > Stack overcloud CREATE_FAILED > > overcloud.ComputeGammaV3.0: > resource_type: OS::TripleO::ComputeGammaV3 > physical_resource_id: 5755d746-7cbf-4f3d-a9e1-d94a713705a7 > status: CREATE_FAILED > status_reason: | > resources[0]: Stack CREATE cancelled > overcloud.Controller.0: > resource_type: OS::TripleO::Controller > physical_resource_id: 4bcf84c1-1d54-45ee-9f81-b6dda780cbd7 > status: CREATE_FAILED > status_reason: | > resources[0]: Stack CREATE cancelled > Not cleaning temporary directory /tmp/tripleoclient-vxGzKo > Not cleaning temporary directory /tmp/tripleoclient-vxGzKo > Heat Stack create failed. > Heat Stack create failed. 
> (undercloud) [stack at staging-director ~]$ > > It seems that it wasn't able to configure the OVS bridges > > (undercloud) [stack at staging-director ~]$ openstack software deployment show 4b4fc54f-7912-40e2-8ad4-79f6179fe701 > +---------------+--------------------------------------------------------+ > | Field | Value | > +---------------+--------------------------------------------------------+ > | id | 4b4fc54f-7912-40e2-8ad4-79f6179fe701 | > | server_id | 0accb7a3-4869-4497-8f3b-5a3d99f3926b | > | config_id | 2641b4dd-afc7-4bf5-a2e2-481c207e4b7f | > | creation_time | 2018-07-30T13:19:44Z | > | updated_time | | > | status | IN_PROGRESS | > | status_reason | Deploy data available | > | input_values | {u'interface_name': u'nic1', u'bridge_name': u'br-ex'} | > | action | CREATE | > +---------------+--------------------------------------------------------+ > (undercloud) [stack at staging-director ~]$ openstack software deployment show a297e8ae-f4c9-41b0-938f-c51f9fe23843 > +---------------+--------------------------------------------------------+ > | Field | Value | > +---------------+--------------------------------------------------------+ > | id | a297e8ae-f4c9-41b0-938f-c51f9fe23843 | > | server_id | 145167da-9b96-4eee-bfe9-399b854c1e84 | > | config_id | d1baf0a5-de9b-48f2-b486-9f5d97f7e94f | > | creation_time | 2018-07-30T13:17:29Z | > | updated_time | | > | status | IN_PROGRESS | > | status_reason | Deploy data available | > | input_values | {u'interface_name': u'nic1', u'bridge_name': u'br-ex'} | > | action | CREATE | > +---------------+--------------------------------------------------------+ > (undercloud) [stack at staging-director ~]$ > > Regards, > Samuel > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From smonderer at vasonanetworks.com Mon Jul 30 16:46:00 2018 From: smonderer at vasonanetworks.com (Samuel Monderer) Date: Mon, 30 Jul 2018 19:46:00 +0300 Subject: [openstack-dev] [tripleo] deployement fails In-Reply-To: <2896B456-87F1-4F54-A4E7-BD06F2CCECF2@rm.ht> References: <2896B456-87F1-4F54-A4E7-BD06F2CCECF2@rm.ht> Message-ID: Yes I tried eith 60 and 120 On Mon, Jul 30, 2018, 19:42 Remo Mattei wrote: > Do you have a timeout set? 
> > > On Jul 30, 2018, at 07:48, Samuel Monderer > wrote: > > > > Hi, > > > > I'm trying to deploy a small environment with one controller and one > compute but i get a timeout with no specific information in the logs > > > > 2018-07-30 13:19:41Z [overcloud.Controller.0.ControllerConfig]: > CREATE_IN_PROGRESS state changed > > 2018-07-30 13:19:41Z [overcloud.Controller.0.ControllerConfig]: > CREATE_COMPLETE state changed > > 2018-07-30 14:04:51Z [overcloud.ComputeGammaV3]: CREATE_FAILED CREATE > aborted (Task create from ResourceGroup "ComputeGammaV3" Stack "overcloud" > [690ee33c-8194-4713-a44f-9c8dcf88359f] Timed out) > > 2018-07-30 14:04:51Z [overcloud.ComputeGammaV3]: UPDATE_FAILED Stack > UPDATE cancelled > > 2018-07-30 14:04:51Z [overcloud]: CREATE_FAILED Timed out > > 2018-07-30 14:04:51Z [overcloud.ComputeGammaV3.0]: CREATE_FAILED Stack > CREATE cancelled > > 2018-07-30 14:04:51Z [overcloud.Controller]: CREATE_FAILED CREATE > aborted (Task create from ResourceGroup "Controller" Stack "overcloud" > [690ee33c-8194-4713-a44f-9c8dcf88359f] Timed out) > > 2018-07-30 14:04:51Z [overcloud]: CREATE_FAILED Timed out > > 2018-07-30 14:04:51Z [overcloud.Controller]: UPDATE_FAILED Stack UPDATE > cancelled > > 2018-07-30 14:04:51Z [overcloud.Controller.0]: CREATE_FAILED Stack > CREATE cancelled > > 2018-07-30 14:04:52Z [overcloud.ComputeGammaV3.0]: CREATE_FAILED > resources[0]: Stack CREATE cancelled > > > > Stack overcloud CREATE_FAILED > > > > overcloud.ComputeGammaV3.0: > > resource_type: OS::TripleO::ComputeGammaV3 > > physical_resource_id: 5755d746-7cbf-4f3d-a9e1-d94a713705a7 > > status: CREATE_FAILED > > status_reason: | > > resources[0]: Stack CREATE cancelled > > overcloud.Controller.0: > > resource_type: OS::TripleO::Controller > > physical_resource_id: 4bcf84c1-1d54-45ee-9f81-b6dda780cbd7 > > status: CREATE_FAILED > > status_reason: | > > resources[0]: Stack CREATE cancelled > > Not cleaning temporary directory /tmp/tripleoclient-vxGzKo > > Not cleaning temporary directory /tmp/tripleoclient-vxGzKo > > Heat Stack create failed. > > Heat Stack create failed. 
> > (undercloud) [stack at staging-director ~]$ > > > > It seems that it wasn't able to configure the OVS bridges > > > > (undercloud) [stack at staging-director ~]$ openstack software deployment > show 4b4fc54f-7912-40e2-8ad4-79f6179fe701 > > > +---------------+--------------------------------------------------------+ > > | Field | Value > | > > > +---------------+--------------------------------------------------------+ > > | id | 4b4fc54f-7912-40e2-8ad4-79f6179fe701 > | > > | server_id | 0accb7a3-4869-4497-8f3b-5a3d99f3926b > | > > | config_id | 2641b4dd-afc7-4bf5-a2e2-481c207e4b7f > | > > | creation_time | 2018-07-30T13:19:44Z > | > > | updated_time | > | > > | status | IN_PROGRESS > | > > | status_reason | Deploy data available > | > > | input_values | {u'interface_name': u'nic1', u'bridge_name': u'br-ex'} > | > > | action | CREATE > | > > > +---------------+--------------------------------------------------------+ > > (undercloud) [stack at staging-director ~]$ openstack software deployment > show a297e8ae-f4c9-41b0-938f-c51f9fe23843 > > > +---------------+--------------------------------------------------------+ > > | Field | Value > | > > > +---------------+--------------------------------------------------------+ > > | id | a297e8ae-f4c9-41b0-938f-c51f9fe23843 > | > > | server_id | 145167da-9b96-4eee-bfe9-399b854c1e84 > | > > | config_id | d1baf0a5-de9b-48f2-b486-9f5d97f7e94f > | > > | creation_time | 2018-07-30T13:17:29Z > | > > | updated_time | > | > > | status | IN_PROGRESS > | > > | status_reason | Deploy data available > | > > | input_values | {u'interface_name': u'nic1', u'bridge_name': u'br-ex'} > | > > | action | CREATE > | > > > +---------------+--------------------------------------------------------+ > > (undercloud) [stack at staging-director ~]$ > > > > Regards, > > Samuel > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From remo at rm.ht Mon Jul 30 16:51:14 2018 From: remo at rm.ht (Remo Mattei) Date: Mon, 30 Jul 2018 09:51:14 -0700 Subject: [openstack-dev] [tripleo] deployement fails In-Reply-To: References: <2896B456-87F1-4F54-A4E7-BD06F2CCECF2@rm.ht> Message-ID: <56B14333-FEB6-41C0-9150-C6F536B535BB@rm.ht> Take it off and check :) > On Jul 30, 2018, at 09:46, Samuel Monderer wrote: > > Yes > I tried eith 60 and 120 > > On Mon, Jul 30, 2018, 19:42 Remo Mattei > wrote: > Do you have a timeout set? 
> > > On Jul 30, 2018, at 07:48, Samuel Monderer > wrote: > > > > Hi, > > > > I'm trying to deploy a small environment with one controller and one compute but i get a timeout with no specific information in the logs > > > > 2018-07-30 13:19:41Z [overcloud.Controller.0.ControllerConfig]: CREATE_IN_PROGRESS state changed > > 2018-07-30 13:19:41Z [overcloud.Controller.0.ControllerConfig]: CREATE_COMPLETE state changed > > 2018-07-30 14:04:51Z [overcloud.ComputeGammaV3]: CREATE_FAILED CREATE aborted (Task create from ResourceGroup "ComputeGammaV3" Stack "overcloud" [690ee33c-8194-4713-a44f-9c8dcf88359f] Timed out) > > 2018-07-30 14:04:51Z [overcloud.ComputeGammaV3]: UPDATE_FAILED Stack UPDATE cancelled > > 2018-07-30 14:04:51Z [overcloud]: CREATE_FAILED Timed out > > 2018-07-30 14:04:51Z [overcloud.ComputeGammaV3.0]: CREATE_FAILED Stack CREATE cancelled > > 2018-07-30 14:04:51Z [overcloud.Controller]: CREATE_FAILED CREATE aborted (Task create from ResourceGroup "Controller" Stack "overcloud" [690ee33c-8194-4713-a44f-9c8dcf88359f] Timed out) > > 2018-07-30 14:04:51Z [overcloud]: CREATE_FAILED Timed out > > 2018-07-30 14:04:51Z [overcloud.Controller]: UPDATE_FAILED Stack UPDATE cancelled > > 2018-07-30 14:04:51Z [overcloud.Controller.0]: CREATE_FAILED Stack CREATE cancelled > > 2018-07-30 14:04:52Z [overcloud.ComputeGammaV3.0]: CREATE_FAILED resources[0]: Stack CREATE cancelled > > > > Stack overcloud CREATE_FAILED > > > > overcloud.ComputeGammaV3.0: > > resource_type: OS::TripleO::ComputeGammaV3 > > physical_resource_id: 5755d746-7cbf-4f3d-a9e1-d94a713705a7 > > status: CREATE_FAILED > > status_reason: | > > resources[0]: Stack CREATE cancelled > > overcloud.Controller.0: > > resource_type: OS::TripleO::Controller > > physical_resource_id: 4bcf84c1-1d54-45ee-9f81-b6dda780cbd7 > > status: CREATE_FAILED > > status_reason: | > > resources[0]: Stack CREATE cancelled > > Not cleaning temporary directory /tmp/tripleoclient-vxGzKo > > Not cleaning temporary directory /tmp/tripleoclient-vxGzKo > > Heat Stack create failed. > > Heat Stack create failed. 
> > (undercloud) [stack at staging-director ~]$ > > > > It seems that it wasn't able to configure the OVS bridges > > > > (undercloud) [stack at staging-director ~]$ openstack software deployment show 4b4fc54f-7912-40e2-8ad4-79f6179fe701 > > +---------------+--------------------------------------------------------+ > > | Field | Value | > > +---------------+--------------------------------------------------------+ > > | id | 4b4fc54f-7912-40e2-8ad4-79f6179fe701 | > > | server_id | 0accb7a3-4869-4497-8f3b-5a3d99f3926b | > > | config_id | 2641b4dd-afc7-4bf5-a2e2-481c207e4b7f | > > | creation_time | 2018-07-30T13:19:44Z | > > | updated_time | | > > | status | IN_PROGRESS | > > | status_reason | Deploy data available | > > | input_values | {u'interface_name': u'nic1', u'bridge_name': u'br-ex'} | > > | action | CREATE | > > +---------------+--------------------------------------------------------+ > > (undercloud) [stack at staging-director ~]$ openstack software deployment show a297e8ae-f4c9-41b0-938f-c51f9fe23843 > > +---------------+--------------------------------------------------------+ > > | Field | Value | > > +---------------+--------------------------------------------------------+ > > | id | a297e8ae-f4c9-41b0-938f-c51f9fe23843 | > > | server_id | 145167da-9b96-4eee-bfe9-399b854c1e84 | > > | config_id | d1baf0a5-de9b-48f2-b486-9f5d97f7e94f | > > | creation_time | 2018-07-30T13:17:29Z | > > | updated_time | | > > | status | IN_PROGRESS | > > | status_reason | Deploy data available | > > | input_values | {u'interface_name': u'nic1', u'bridge_name': u'br-ex'} | > > | action | CREATE | > > +---------------+--------------------------------------------------------+ > > (undercloud) [stack at staging-director ~]$ > > > > Regards, > > Samuel > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From jungleboyj at gmail.com Mon Jul 30 16:57:47 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Mon, 30 Jul 2018 11:57:47 -0500 Subject: [openstack-dev] [election][cinder] PTL Candidacy for Stein Release Message-ID: All, I have submitted a letter announcing my Cinder PTL Candidacy for the Stein Release Cycle here:  https://review.openstack.org/587139 I am including a copy of the letter below. Thank you for your continued support! Sincerely, Jay Bryant (jungleboyj) --------------------------------------------------------------------------------------------------------- All, This letter is to indicate my interest in continuing to serve as the Cinder PTL for the Stein release. This would be my third release as PTL and am happy to continue leading this great project. 
As I look back at the last two releases we have gotten some good things done.  The implementation of multi-attach has been a goal of Cinder for quite some time and I am glad to have been able to help make this happen.  We have also seen a change from trying to get new features implemented in Cinder to fixing bugs and making Cinder more user friendly and stable. During the Rocky release we have continued to improve our documentation, have worked on improving HA support and removed a lot of old code that did not need to remain in tree. I think this is an important evolution in the Cinder project: to focus on stability, usability and maintainability of our code. With that said I think the Stein release is going to be a challenging release for Cinder.  We have the following issues that are going to need to be addressed: * Migration to Storyboard * Dwindling review support * Making decisions around the Placement Service and Cinder * Continuing discussion around Cinder as a Stand-alone service I am hoping that I will be able to use my experience over the last two releases to move the above issues forward. Sincerely, Jay Bryant (jungleboyj) From aspiers at suse.com Mon Jul 30 17:01:09 2018 From: aspiers at suse.com (Adam Spiers) Date: Mon, 30 Jul 2018 18:01:09 +0100 Subject: [openstack-dev] [self-healing] [ptg] [monasca] PTG track schedule published In-Reply-To: References: Message-ID: <20180730170109.35dhd2slbj7ifz77@pacific.linksys.moosehall> Hi Witek, Thanks a lot for the offer! I've suggested to Thierry that Thursday morning probably works best, but if the room logistics don't permit that then we might have to accept your kind offer - I'll let you know. Cheers! Adam Bedyk, Witold wrote: >Hi Adam, > >if nothing else works, we could probably offer you half-day of Monasca slot on Monday or Tuesday afternoon. I'm afraid though that our room might be too small for you. > >Cheers >Witek > >>-----Original Message----- >>From: Thierry Carrez >>Sent: Freitag, 20. Juli 2018 18:46 >>To: Adam Spiers >>Cc: openstack-dev mailing list >>Subject: Re: [openstack-dev] [self-healing] [ptg] PTG track schedule >>published >> >>Adam Spiers wrote: >>>Apologies - I have had to change plans and leave on the Thursday >>>evening (old friend is getting married on Saturday morning).  Is there >>>any chance of swapping the self-healing slot with one of the others? >> >>It's tricky, as you asked to avoid conflicts with API SIG, Watcher, Monasca, >>Masakari, and Mistral... Which day would be best for you given the current >>schedule (assuming we don't move anything else as it's too late for that). >> >>-- >>Thierry Carrez (ttx) From openstack at nemebean.com Mon Jul 30 17:02:50 2018 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 30 Jul 2018 12:02:50 -0500 Subject: [openstack-dev] [release] Github release tarballs broken Message-ID: According to https://bugs.launchpad.net/pbr/+bug/1742809 our github release tarballs don't actually work. It seems to be a github-specific issue because I was unable to reproduce the problem with a tarball from releases.openstack.org. My best guess is that github's release process differs from ours and doesn't work with our projects. I see a couple of options for fixing that. Either we figure out how to make Github's release process DTRT for our projects, or we figure out a way to override Github's release artifacts with our own. I'm not familiar enough with this to know which is a better (or even possible) option, so I'm sending this to solicit help. Thanks. 
-Ben

From ifatafekn at gmail.com Mon Jul 30 17:05:35 2018
From: ifatafekn at gmail.com (Ifat Afek)
Date: Mon, 30 Jul 2018 20:05:35 +0300
Subject: [openstack-dev] [Vitrage][PTL][Election] PTL candidacy for the Stein cycle
Message-ID: 

Hi all,

I would like to announce my candidacy to continue as Vitrage PTL for the Stein release.

I’ve been the PTL of Vitrage since the day it started. It has been an amazing journey, both for Vitrage and for me personally. During the Rocky cycle, we have significantly enhanced the stability and usability of Vitrage and we added support for integrating Vitrage with several other projects. We also took an active part in the self-healing SIG discussions, as we believe Vitrage should hold an important role in every self-healing scenario.

Among the most important tasks we did in Rocky were:

* Fast-failover of vitrage-graph
* Alarm history
* Significant performance improvements
* Kubernetes and Prometheus datasources

In Stein, I would like to continue the effort around Vitrage usability and stability. In addition, we should integrate Vitrage with more projects, to give the user maximum visibility of the state of the system. On top of all the technical goals, I plan to continue the effort of enlarging our community. We are always looking for new contributors!

I look forward to working with you all in the coming cycle.

Thanks,
Ifat.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From eumel at arcor.de Mon Jul 30 17:20:57 2018
From: eumel at arcor.de (Frank Kloeker)
Date: Mon, 30 Jul 2018 19:20:57 +0200
Subject: [openstack-dev] [i18n] Edge and Containers whitepapers ready for translation
In-Reply-To: <5B5F295F.3090608@openstack.org>
References: <5B4CB64F.4060602@openstack.org> <5B4CB93F.6070202@openstack.org> <5B4DF6B4.9030501@openstack.org> <7f053a98-718b-0470-9ffa-934f0e715a76@gmail.com> <5B4E132E.5050607@openstack.org> <5B50A476.8010606@openstack.org> <5B5F295F.3090608@openstack.org>
Message-ID: <1f5afd62cc3a9a8923586a404e707366@arcor.de>

Hi Jimmy,

from the GUI I'll get this link:
https://translate.openstack.org/rest/file/translation/edge-computing/pot-translation/de/po?docId=cloud-edge-computing-beyond-the-data-center

a paper version exists only in the container whitepaper:
https://translate.openstack.org/rest/file/translation/leveraging-containers-openstack/paper/de/po?docId=leveraging-containers-and-openstack

In general, there is no group named papers.

kind regards

Frank

Am 2018-07-30 17:06, schrieb Jimmy McArthur:
> Frank,
>
> We're getting a 404 when looking for the pot file on the Zanata API:
> https://translate.openstack.org/rest/file/translation/papers/papers/de/po?docId=edge-computing
>
> As a result, we can't pull the po files. Any idea what might be
> happening?
>
> Seeing the same thing with both papers...
>
> Thank you,
> Jimmy
>
> Frank Kloeker wrote:
>> Hi Jimmy,
>>
>> Korean and German version are now done on the new format. Can you
>> check publishing?
>>
>> thx
>>
>> Frank
>>
>> Am 2018-07-19 16:47, schrieb Jimmy McArthur:
>>> Hi all -
>>>
>>> Follow up on the Edge paper specifically:
>>> https://translate.openstack.org/iteration/view/edge-computing/pot-translation/documents?dswid=-3192
>>> This is now available. As I mentioned on IRC this morning, it should
>>> be VERY close to the PDF. Probably just needs a quick review.
>>>
>>> Let me know if I can assist with anything.
>>>
>>> Thank you to i18n team for all of your help!!!
>>> >>> Cheers, >>> Jimmy >>> >>> Jimmy McArthur wrote: >>>> Ian raises some great points :) I'll try to address below... >>>> >>>> Ian Y. Choi wrote: >>>>> Hello, >>>>> >>>>> When I saw overall translation source strings on container >>>>> whitepaper, I would infer that new edge computing whitepaper >>>>> source strings would include HTML markup tags. >>>> One of the things I discussed with Ian and Frank in Vancouver is the >>>> expense of recreating PDFs with new translations. It's >>>> prohibitively expensive for the Foundation as it requires design >>>> resources which we just don't have. As a result, we created the >>>> Containers whitepaper in HTML, so that it could be easily updated >>>> w/o working with outside design contractors. I indicated that we >>>> would also be moving the Edge paper to HTML so that we could prevent >>>> that additional design resource cost. >>>>> On the other hand, the source strings of edge computing whitepaper >>>>> which I18n team previously translated do not include HTML markup >>>>> tags, since the source strings are based on just text format. >>>> The version that Akihiro put together was based on the Edge PDF, >>>> which we unfortunately didn't have the resources to implement in the >>>> same format. >>>>> >>>>> I really appreciate Akihiro's work on RST-based support on >>>>> publishing translated edge computing whitepapers, since >>>>> translators do not have to re-translate all the strings. >>>> I would like to second this. It took a lot of initiative to work on >>>> the RST-based translation. At the moment, it's just not usable for >>>> the reasons mentioned above. >>>>> On the other hand, it seems that I18n team needs to investigate on >>>>> translating similar strings of HTML-based edge computing whitepaper >>>>> source strings, which would discourage translators. >>>> Can you expand on this? I'm not entirely clear on why the HTML based >>>> translation is more difficult. >>>>> >>>>> That's my point of view on translating edge computing whitepaper. >>>>> >>>>> For translating container whitepaper, I want to further ask the >>>>> followings since *I18n-based tools* >>>>> would mean for translators that translators can test and publish >>>>> translated whitepapers locally: >>>>> >>>>> - How to build translated container whitepaper using original >>>>> Silverstripe-based repository? >>>>> https://docs.openstack.org/i18n/latest/tools.html describes well >>>>> how to build translated artifacts for RST-based OpenStack >>>>> repositories >>>>> but I could not find the way how to build translated container >>>>> whitepaper with translated resources on Zanata. >>>> This is a little tricky. It's possible to set up a local version of >>>> the OpenStack website >>>> (https://github.com/OpenStackweb/openstack-org/blob/master/installation.md). >>>> However, we have to manually ingest the po files as they are >>>> completed and then push them out to production, so that wouldn't do >>>> much to help with your local build. I'm open to suggestions on how >>>> we can make this process easier for the i18n team. >>>> >>>> Thank you, >>>> Jimmy >>>>> >>>>> >>>>> With many thanks, >>>>> >>>>> /Ian >>>>> >>>>> Jimmy McArthur wrote on 7/17/2018 11:01 PM: >>>>>> Frank, >>>>>> >>>>>> I'm sorry to hear about the displeasure around the Edge paper. As >>>>>> mentioned in a prior thread, the RST format that Akihiro worked >>>>>> did not work with the Zanata process that we have been using with >>>>>> our CMS. 
Additionally, the existing EDGE page is a PDF, so we had >>>>>> to build a new template to work with the new HTML whitepaper >>>>>> layout we created for the Containers paper. I outlined this in the >>>>>> thread " [OpenStack-I18n] [Edge-computing] [Openstack-sigs] Edge >>>>>> Computing Whitepaper Translation" on 6/25/18 and mentioned we >>>>>> would be ready with the template around 7/13. >>>>>> >>>>>> We completed the work on the new whitepaper template and then put >>>>>> out the pot files on Zanata so we can get the po language files >>>>>> back. If this process is too cumbersome for the translation team, >>>>>> I'm open to discussion, but right now our entire translation >>>>>> process is based on the official OpenStack Docs translation >>>>>> process outlined by the i18n team: >>>>>> https://docs.openstack.org/i18n/latest/en_GB/tools.html >>>>>> >>>>>> Again, I realize Akihiro put in some work on his own proposing the >>>>>> new translation type. If the i18n team is moving to this format >>>>>> instead, we can work on redoing our process. >>>>>> >>>>>> Please let me know if I can clarify further. >>>>>> >>>>>> Thanks, >>>>>> Jimmy >>>>>> >>>>>> Frank Kloeker wrote: >>>>>>> Hi Jimmy, >>>>>>> >>>>>>> permission was added for you and Sebastian. The Container >>>>>>> Whitepaper is on the Zanata frontpage now. But we removed Edge >>>>>>> Computing whitepaper last week because there is a kind of >>>>>>> displeasure in the team since the results of translation are >>>>>>> still not published beside Chinese version. It would be nice if >>>>>>> we have a commitment from the Foundation that results are >>>>>>> published in a specific timeframe. This includes your >>>>>>> requirements until the translation should be available. >>>>>>> >>>>>>> thx Frank >>>>>>> >>>>>>> Am 2018-07-16 17:26, schrieb Jimmy McArthur: >>>>>>>> Sorry, I should have also added... we additionally need >>>>>>>> permissions so >>>>>>>> that we can add the a new version of the pot file to this >>>>>>>> project: >>>>>>>> https://translate.openstack.org/project/view/edge-computing/versions?dswid=-7835 >>>>>>>> Thanks! >>>>>>>> Jimmy >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> Jimmy McArthur wrote: >>>>>>>>> Hi all - >>>>>>>>> >>>>>>>>> We have both of the current whitepapers up and available for >>>>>>>>> translation. Can we promote these on the Zanata homepage? >>>>>>>>> >>>>>>>>> https://translate.openstack.org/project/view/leveraging-containers-openstack?dswid=5684 >>>>>>>>> https://translate.openstack.org/iteration/view/edge-computing/master/documents?dswid=5684 >>>>>>>>> Thanks all! 
>>>>>>>>> Jimmy
>>>>>>>>
>>>>>>>>
>>>>>>>> __________________________________________________________________________
>>>>>>>> OpenStack Development Mailing List (not for usage questions)
>>>>>>>> Unsubscribe:
>>>>>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>>>>
>>>>>>
>>>>>>
>>>>>> __________________________________________________________________________
>>>>>> OpenStack Development Mailing List (not for usage questions)
>>>>>> Unsubscribe:
>>>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>>
>>>>>
>>>>
>>>>
>>>> __________________________________________________________________________
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe:
>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>

From e0ne at e0ne.info Mon Jul 30 17:47:18 2018
From: e0ne at e0ne.info (Ivan Kolodyazhny)
Date: Mon, 30 Jul 2018 20:47:18 +0300
Subject: [openstack-dev] [election][horizon][ptl] PTL Candidacy for Stein Release
Message-ID: 

Hello everyone,

I would like to announce my candidacy for PTL of Horizon for the Stein release.

I had the honor of serving as PTL in the Rocky timeframe and I want to continue working with such a great team, with the PTL hat on.

During the Rocky release cycle we worked mostly on technical debt. We improved our CI with new jobs for Selenium tests. Some work is still in progress on getting integration tests working again. We're pretty close to reaching the mox-to-mock migration community goal. I would like to work on these areas in the Stein release too.

In addition to this, as PTL I'm going to work on the following areas:

* Finish work on the mox-to-mock migration and the integration tests CI job.
* More cross-project work: cross-project CI jobs for plugins, and working closely with project teams to understand which features should be implemented in Horizon.
* We need to help contributors get patches merged faster: work closely with new contributors and improve documentation to make it friendly for both JavaScript and Python developers.

I'm looking forward to working together with all of you on the Stein release and hope for your help with my efforts.

Thank you,
Ivan Kolodyazhny (e0ne)

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mjturek at linux.vnet.ibm.com Mon Jul 30 17:55:30 2018
From: mjturek at linux.vnet.ibm.com (Michael Turek)
Date: Mon, 30 Jul 2018 13:55:30 -0400
Subject: [openstack-dev] [ironic] [FFE] Teach ironic about ppc64le boot requirements
Message-ID: 

I would like to request an FFE for this RFE https://storyboard.openstack.org/#!/story/1749057

The implementation should be complete and is currently passing CI, but does need more reviews. Ideally, I'd also like to test this locally.

pros
---
- Improves ppc64le support

cons
---
- Bumps ironic-lib version for both IPA and Ironic

risk
---
- There are other deployment methods for ppc64le, including wholedisk and netboot.
However, this feature is desired to improve parity between x86 and ppc64le for tripleo. The feature should not affect any current working deployment methods, but please review closely. Please let me know if you'd like more detail on this or have any questions! Thanks! -Mike  Turek From sebastian at tipit.net Mon Jul 30 18:09:46 2018 From: sebastian at tipit.net (Sebastian Marcet) Date: Mon, 30 Jul 2018 15:09:46 -0300 Subject: [openstack-dev] [i18n] Edge and Containers whitepapers ready for translation In-Reply-To: <1f5afd62cc3a9a8923586a404e707366@arcor.de> References: <5B4CB64F.4060602@openstack.org> <5B4CB93F.6070202@openstack.org> <5B4DF6B4.9030501@openstack.org> <7f053a98-718b-0470-9ffa-934f0e715a76@gmail.com> <5B4E132E.5050607@openstack.org> <5B50A476.8010606@openstack.org> <5B5F295F.3090608@openstack.org> <1f5afd62cc3a9a8923586a404e707366@arcor.de> Message-ID: Hi Frank, i was double checking pot file and realized that original pot missed some parts of the original paper (subsections of the paper) apologizes on that i just re uploaded an updated pot file with missing subsections regards On Mon, Jul 30, 2018 at 2:20 PM, Frank Kloeker wrote: > Hi Jimmy, > > from the GUI I'll get this link: > https://translate.openstack.org/rest/file/translation/edge- > computing/pot-translation/de/po?docId=cloud-edge-computing- > beyond-the-data-center > > paper version are only in container whitepaper: > > https://translate.openstack.org/rest/file/translation/levera > ging-containers-openstack/paper/de/po?docId=leveraging- > containers-and-openstack > > In general there is no group named papers > > kind regards > > Frank > > > Am 2018-07-30 17:06, schrieb Jimmy McArthur: > >> Frank, >> >> We're getting a 404 when looking for the pot file on the Zanata API: >> https://translate.openstack.org/rest/file/translation/papers >> /papers/de/po?docId=edge-computing >> >> As a result, we can't pull the po files. Any idea what might be >> happening? >> >> Seeing the same thing with both papers... >> >> Thank you, >> Jimmy >> >> Frank Kloeker wrote: >> >>> Hi Jimmy, >>> >>> Korean and German version are now done on the new format. Can you check >>> publishing? >>> >>> thx >>> >>> Frank >>> >>> Am 2018-07-19 16:47, schrieb Jimmy McArthur: >>> >>>> Hi all - >>>> >>>> Follow up on the Edge paper specifically: >>>> https://translate.openstack.org/iteration/view/edge-computin >>>> g/pot-translation/documents?dswid=-3192 This is now available. As I >>>> mentioned on IRC this morning, it should >>>> be VERY close to the PDF. Probably just needs a quick review. >>>> >>>> Let me know if I can assist with anything. >>>> >>>> Thank you to i18n team for all of your help!!! >>>> >>>> Cheers, >>>> Jimmy >>>> >>>> Jimmy McArthur wrote: >>>> >>>>> Ian raises some great points :) I'll try to address below... >>>>> >>>>> Ian Y. Choi wrote: >>>>> >>>>>> Hello, >>>>>> >>>>>> When I saw overall translation source strings on container >>>>>> whitepaper, I would infer that new edge computing whitepaper >>>>>> source strings would include HTML markup tags. >>>>>> >>>>> One of the things I discussed with Ian and Frank in Vancouver is the >>>>> expense of recreating PDFs with new translations. It's prohibitively >>>>> expensive for the Foundation as it requires design resources which we just >>>>> don't have. As a result, we created the Containers whitepaper in HTML, so >>>>> that it could be easily updated w/o working with outside design >>>>> contractors. 
I indicated that we would also be moving the Edge paper to >>>>> HTML so that we could prevent that additional design resource cost. >>>>> >>>>>> On the other hand, the source strings of edge computing whitepaper >>>>>> which I18n team previously translated do not include HTML markup >>>>>> tags, since the source strings are based on just text format. >>>>>> >>>>> The version that Akihiro put together was based on the Edge PDF, which >>>>> we unfortunately didn't have the resources to implement in the same format. >>>>> >>>>>> >>>>>> I really appreciate Akihiro's work on RST-based support on publishing >>>>>> translated edge computing whitepapers, since >>>>>> translators do not have to re-translate all the strings. >>>>>> >>>>> I would like to second this. It took a lot of initiative to work on >>>>> the RST-based translation. At the moment, it's just not usable for the >>>>> reasons mentioned above. >>>>> >>>>>> On the other hand, it seems that I18n team needs to investigate on >>>>>> translating similar strings of HTML-based edge computing whitepaper >>>>>> source strings, which would discourage translators. >>>>>> >>>>> Can you expand on this? I'm not entirely clear on why the HTML based >>>>> translation is more difficult. >>>>> >>>>>> >>>>>> That's my point of view on translating edge computing whitepaper. >>>>>> >>>>>> For translating container whitepaper, I want to further ask the >>>>>> followings since *I18n-based tools* >>>>>> would mean for translators that translators can test and publish >>>>>> translated whitepapers locally: >>>>>> >>>>>> - How to build translated container whitepaper using original >>>>>> Silverstripe-based repository? >>>>>> https://docs.openstack.org/i18n/latest/tools.html describes well >>>>>> how to build translated artifacts for RST-based OpenStack repositories >>>>>> but I could not find the way how to build translated container >>>>>> whitepaper with translated resources on Zanata. >>>>>> >>>>> This is a little tricky. It's possible to set up a local version of >>>>> the OpenStack website (https://github.com/OpenStackw >>>>> eb/openstack-org/blob/master/installation.md). However, we have to >>>>> manually ingest the po files as they are completed and then push them out >>>>> to production, so that wouldn't do much to help with your local build. I'm >>>>> open to suggestions on how we can make this process easier for the i18n >>>>> team. >>>>> >>>>> Thank you, >>>>> Jimmy >>>>> >>>>>> >>>>>> >>>>>> With many thanks, >>>>>> >>>>>> /Ian >>>>>> >>>>>> Jimmy McArthur wrote on 7/17/2018 11:01 PM: >>>>>> >>>>>>> Frank, >>>>>>> >>>>>>> I'm sorry to hear about the displeasure around the Edge paper. As >>>>>>> mentioned in a prior thread, the RST format that Akihiro worked did not >>>>>>> work with the Zanata process that we have been using with our CMS. >>>>>>> Additionally, the existing EDGE page is a PDF, so we had to build a new >>>>>>> template to work with the new HTML whitepaper layout we created for the >>>>>>> Containers paper. I outlined this in the thread " [OpenStack-I18n] >>>>>>> [Edge-computing] [Openstack-sigs] Edge Computing Whitepaper Translation" on >>>>>>> 6/25/18 and mentioned we would be ready with the template around 7/13. >>>>>>> >>>>>>> We completed the work on the new whitepaper template and then put >>>>>>> out the pot files on Zanata so we can get the po language files back. 
If >>>>>>> this process is too cumbersome for the translation team, I'm open to >>>>>>> discussion, but right now our entire translation process is based on the >>>>>>> official OpenStack Docs translation process outlined by the i18n team: >>>>>>> https://docs.openstack.org/i18n/latest/en_GB/tools.html >>>>>>> >>>>>>> Again, I realize Akihiro put in some work on his own proposing the >>>>>>> new translation type. If the i18n team is moving to this format instead, we >>>>>>> can work on redoing our process. >>>>>>> >>>>>>> Please let me know if I can clarify further. >>>>>>> >>>>>>> Thanks, >>>>>>> Jimmy >>>>>>> >>>>>>> Frank Kloeker wrote: >>>>>>> >>>>>>>> Hi Jimmy, >>>>>>>> >>>>>>>> permission was added for you and Sebastian. The Container >>>>>>>> Whitepaper is on the Zanata frontpage now. But we removed Edge Computing >>>>>>>> whitepaper last week because there is a kind of displeasure in the team >>>>>>>> since the results of translation are still not published beside Chinese >>>>>>>> version. It would be nice if we have a commitment from the Foundation that >>>>>>>> results are published in a specific timeframe. This includes your >>>>>>>> requirements until the translation should be available. >>>>>>>> >>>>>>>> thx Frank >>>>>>>> >>>>>>>> Am 2018-07-16 17:26, schrieb Jimmy McArthur: >>>>>>>> >>>>>>>>> Sorry, I should have also added... we additionally need >>>>>>>>> permissions so >>>>>>>>> that we can add the a new version of the pot file to this project: >>>>>>>>> https://translate.openstack.org/project/view/edge-computing/ >>>>>>>>> versions?dswid=-7835 Thanks! >>>>>>>>> Jimmy >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> Jimmy McArthur wrote: >>>>>>>>> >>>>>>>>>> Hi all - >>>>>>>>>> >>>>>>>>>> We have both of the current whitepapers up and available for >>>>>>>>>> translation. Can we promote these on the Zanata homepage? >>>>>>>>>> >>>>>>>>>> https://translate.openstack.org/project/view/leveraging-cont >>>>>>>>>> ainers-openstack?dswid=5684 https://translate.openstack.or >>>>>>>>>> g/iteration/view/edge-computing/master/documents?dswid=5684 >>>>>>>>>> Thanks all! 
>>>>>>>>>> Jimmy >>>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> __________________________________________________________________________ >>>>>>>>> OpenStack Development Mailing List (not for usage questions) >>>>>>>>> Unsubscribe: OpenStack-dev-request at lists.op >>>>>>>>> enstack.org?subject:unsubscribe >>>>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>>>>>> >>>>>>>> >>>>>>>> >>>>>>> >>>>>>> __________________________________________________________________________ >>>>>>> OpenStack Development Mailing List (not for usage questions) >>>>>>> Unsubscribe: OpenStack-dev-request at lists.op >>>>>>> enstack.org?subject:unsubscribe >>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>>>> >>>>>> >>>>>> >>>>>> >>>>> >>>>> __________________________________________________________________________ >>>>> OpenStack Development Mailing List (not for usage questions) >>>>> Unsubscribe: OpenStack-dev-request at lists.op >>>>> enstack.org?subject:unsubscribe >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>> >>>> >>>> >>>> __________________________________________________________________________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: OpenStack-dev-request at lists.op >>>> enstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>> >>> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaypipes at gmail.com Mon Jul 30 18:13:28 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 30 Jul 2018 14:13:28 -0400 Subject: [openstack-dev] [nova] [placement] compute nodes use of placement In-Reply-To: References: Message-ID: On 07/26/2018 12:15 PM, Chris Dent wrote: > The `in_tree` calls happen from the report client method > `_get_providers_in_tree` which is called by > `_ensure_resource_provider` which can be called from multiple > places, but in this case is being called both times from > `get_provider_tree_and_ensure_root`, which is also responsible for > two of the inventory request. > > `get_provider_tree_and_ensure_root` is called by `_update` in the > resource tracker. > > `_update` is called by both `_init_compute_node` and > `_update_available_resource`. Every single period job iteration. > `_init_compute_node` is called from _update_available_resource` > itself. > > That accounts for the overall doubling. Actually, no. What accounts for the overall doubling is the fact that we no longer short-circuit return from _update() when there are no known changes in the node's resources. We *used* to do a quick check of whether the resource tracker's local cache of resources had been changed, and just exit _update() if no changes were detected. However, this patch modified that so that we *always* call to get inventory, even if the resource tracker noticed no changes in resources: https://github.com/openstack/nova/commit/e2a18a37190e4c7b7697a8811553d331e208182c The reason for that change is because the virt driver was tracking vGPU resources now and those vGPU resources were not tracked by the resource tracker's local cache of resources. Thus, we now always call the virt driver get_inventory() call (which morphed into the virt driver's update_provider_tree() call, but the change to update_provider_tree() didn't actually increase the number of calls to get inventories. It was the patch above that did that. 
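To make that concrete, the early return being discussed looks roughly like the sketch below. This is a simplified illustration with made-up names, not the actual nova resource tracker code:

    # Simplified sketch only -- names and structure are illustrative.
    class MiniResourceTracker(object):
        def __init__(self, report_func):
            # report_func stands in for the expensive path: virt driver
            # inventory collection plus the placement API calls.
            self.report_func = report_func
            self.old_resources = {}   # nodename -> last-seen resources

        def _resource_change(self, nodename, resources):
            """Return True only if the node's resources differ from the cache."""
            if self.old_resources.get(nodename) != resources:
                self.old_resources[nodename] = dict(resources)
                return True
            return False

        def _update(self, nodename, resources):
            if not self._resource_change(nodename, resources):
                # The short-circuit: nothing changed since the last periodic
                # run, so skip the inventory/placement round-trips entirely.
                return
            self.report_func(nodename, resources)

With a check like that in place, the periodic task only talks to placement when something actually changed; without it, every iteration goes through the full reporting path.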
Best, -jay From jillr at redhat.com Mon Jul 30 18:15:44 2018 From: jillr at redhat.com (Jill Rouleau) Date: Mon, 30 Jul 2018 11:15:44 -0700 Subject: [openstack-dev] [tripleo] The Weekly Owl - 25th Edition In-Reply-To: References: Message-ID: <1532974544.5688.10.camel@redhat.com> On Mon, 2018-07-30 at 11:35 -0400, Pradeep Kilambi wrote: > > > On Mon, Jul 30, 2018 at 10:42 AM Alex Schultz > wrote: > > On Mon, Jul 30, 2018 at 8:32 AM, Martin Magr > > wrote: > > > > > > > > > On Tue, Jul 17, 2018 at 6:12 PM, Emilien Macchi > m> wrote: > > >> > > >> Your fellow reporter took a break from writing, but is now back > > on his > > >> pen. > > >> > > >> Welcome to the twenty-fifth edition of a weekly update in TripleO > > world! > > >> The goal is to provide a short reading (less than 5 minutes) to > > learn > > >> what's new this week. > > >> Any contributions and feedback are welcome. > > >> Link to the previous version: > > >> http://lists.openstack.org/pipermail/openstack-dev/2018-June/1314 > > 26.html > > >> > > >> +---------------------------------+ > > >> | General announcements | > > >> +---------------------------------+ > > >> > > >> +--> Rocky Milestone 3 is next week. After, any feature code will > > require > > >> Feature Freeze Exception (FFE), asked on the mailing-list. We'll > > enter a > > >> bug-fix only and stabilization period, until we can push the > > first stable > > >> version of Rocky. > > > > > > > > > Hey guys, > > > > > >   I would like to ask for FFE for backup and restore, where we > > ended up > > > deciding where is the best place for the code base for this > > project (please > > > see [1] for details). We believe that B&R support for overcloud > > control > > > plane will be good addition to a rocky release, but we started > > with this > > > initiative quite late indeed. The final result should the support > > in > > > openstack client, where "openstack overcloud (backup|restore)" > > would work as > > > a charm. Thanks in advance for considering this feature. > > > > > > > Was there a blueprint/spec for this effort?  Additionally do we have > > a > > list of the outstanding work required for this? If it's just these > > two > > playbooks, it might be ok for an FFE. But if there's additional > > tripleoclient related changes, I wouldn't necessarily feel > > comfortable > > with these unless we have a complete list of work.  Just as a side > > note, I'm not sure putting these in tripleo-common is going to be > > the > > ideal place for this. Was it this review? https://review.openstack.org/#/c/582453/ For Stein we'll have an ansible role[0] and playbook repo[1] where these types of tasks should live. [0] https://github.com/openstack/ansible-role-openstack-operations  [1] https://review.openstack.org/#/c/583415/ > > Thanks Alex. For Rocky, if we can ship the playbooks with relevant > docs we should be good. We will integrated with client in Stein > release with restore logic included. Regarding putting tripleo-common, > we're open to suggestions. I think Dan just submitted the review so we > can get some eyes on the playbooks. Where do you suggest is better > place for these instead? 
>   > > > > Thanks, > > -Alex > > > > > Regards, > > > Martin > > > > > > [1] https://review.openstack.org/#/c/582453/ > > > > > >> > > >> +--> Next PTG will be in Denver, please propose topics: > > >> https://etherpad.openstack.org/p/tripleoci-ptg-stein > > >> +--> Multiple squads are currently brainstorming a framework to > > provide > > >> validations pre/post upgrades - stay in touch! > > >> > > >> +------------------------------+ > > >> | Continuous Integration | > > >> +------------------------------+ > > >> > > >> +--> Sprint theme: migration to Zuul v3 (More on > > >> https://trello.com/c/vyWXcKOB/841-sprint-16-goals) > > >> +--> Sagi is the rover and Chandan is the ruck. Please tell them > > any CI > > >> issue. > > >> +--> Promotion on master is 4 days, 0 days on Queens and Pike and > > 1 day on > > >> Ocata. > > >> +--> More: https://etherpad.openstack.org/p/tripleo-ci-squad-meet > > ing > > >> > > >> +-------------+ > > >> | Upgrades | > > >> +-------------+ > > >> > > >> +--> Good progress on major upgrades workflow, need reviews! > > >> +--> More: https://etherpad.openstack.org/p/tripleo-upgrade-squad > > -status > > >> > > >> +---------------+ > > >> | Containers | > > >> +---------------+ > > >> > > >> +--> We switched python-tripleoclient to deploy containerized > > undercloud > > >> by default! > > >> +--> Image prepare via workflow is still work in progress. > > >> +--> More: > > >> https://etherpad.openstack.org/p/tripleo-containers-squad-status > > >> > > >> +----------------------+ > > >> | config-download | > > >> +----------------------+ > > >> > > >> +--> UI integration is almost done (need review) > > >> +--> Bug with failure listing is being fixed: > > >> https://bugs.launchpad.net/tripleo/+bug/1779093 > > >> +--> More: > > >> https://etherpad.openstack.org/p/tripleo-config-download-squad-st > > atus > > >> > > >> +--------------+ > > >> | Integration | > > >> +--------------+ > > >> > > >> +--> We're enabling decoupled deployment plans e.g for OpenShift, > > DPDK > > >> etc: > > >> https://review.openstack.org/#/q/topic:alternate_plans+(status:op > > en+OR+status:merged) > > >> (need reviews). > > >> +--> More: > > >> https://etherpad.openstack.org/p/tripleo-integration-squad-status > > >> > > >> +---------+ > > >> | UI/CLI | > > >> +---------+ > > >> > > >> +--> Good progress on network configuration via UI > > >> +--> Config-download patches are being reviewed and a lot of > > testing is > > >> going on. > > >> +--> More: https://etherpad.openstack.org/p/tripleo-ui-cli-squad- > > status > > >> > > >> +---------------+ > > >> | Validations | > > >> +---------------+ > > >> > > >> +--> Working on OpenShift validations, need reviews. > > >> +--> More: > > >> https://etherpad.openstack.org/p/tripleo-validations-squad-status > > >> > > >> +---------------+ > > >> | Networking | > > >> +---------------+ > > >> > > >> +--> No updates this week. > > >> +--> More: > > >> https://etherpad.openstack.org/p/tripleo-networking-squad-status > > >> > > >> +--------------+ > > >> | Workflows | > > >> +--------------+ > > >> > > >> +--> No updates this week. > > >> +--> More: https://etherpad.openstack.org/p/tripleo-workflows-squ > > ad-status > > >> > > >> +-----------+ > > >> | Security | > > >> +-----------+ > > >> > > >> +--> Working on Secrets management and Limit TripleO users > > efforts > > >> +--> More: https://etherpad.openstack.org/p/tripleo-security-squa > > d > > >> > > >> +------------+ > > >> | Owl fact  | > > >> +------------+ > > >> Elf owls live in a cacti. 
They are the smallest owls, and live in > > the > > >> southwestern United States and Mexico. It will sometimes make its > > home in > > >> the giant saguaro cactus, nesting in holes made by other animals. > > However, > > >> the elf owl isn’t picky and will also live in trees or on > > telephone poles. > > >> > > >> Source: > > >> http://mentalfloss.com/article/68473/15-mysterious-facts-about-ow > > ls > > >> > > >> Thank you all for reading and stay tuned! > > >> -- > > >> Your fellow reporter, Emilien Macchi > > >> > > >> > > ____________________________________________________________________ > > ______ > > >> OpenStack Development Mailing List (not for usage questions) > > >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:un > > subscribe > > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > >> > > > > > > > > ____________________________________________________________________ > > ______ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:uns > > ubscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > ____________________________________________________________________ > > ______ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsub > > scribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > --  > Cheers, > ~ Prad > ______________________________________________________________________ > ____ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubsc > ribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From cdent+os at anticdent.org Mon Jul 30 18:20:51 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Mon, 30 Jul 2018 19:20:51 +0100 (BST) Subject: [openstack-dev] [nova] [placement] compute nodes use of placement In-Reply-To: References: Message-ID: On Mon, 30 Jul 2018, Jay Pipes wrote: > On 07/26/2018 12:15 PM, Chris Dent wrote: >> The `in_tree` calls happen from the report client method >> `_get_providers_in_tree` which is called by >> `_ensure_resource_provider` which can be called from multiple >> places, but in this case is being called both times from >> `get_provider_tree_and_ensure_root`, which is also responsible for >> two of the inventory request. >> >> `get_provider_tree_and_ensure_root` is called by `_update` in the >> resource tracker. >> >> `_update` is called by both `_init_compute_node` and >> `_update_available_resource`. Every single period job iteration. >> `_init_compute_node` is called from _update_available_resource` >> itself. >> >> That accounts for the overall doubling. > > Actually, no. What accounts for the overall doubling is the fact that we no > longer short-circuit return from _update() when there are no known changes in > the node's resources. I think we're basically agreeing on this: I'm describing the current state of affairs, not attempting to describe why it is that way. Your insight helps to explain why. 
I have a set of change in progress which experiments with what happens if we don't call placement a second time in the _update call: https://review.openstack.org/#/c/587050/ Just to see what might blow up. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From jaypipes at gmail.com Mon Jul 30 18:55:27 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 30 Jul 2018 14:55:27 -0400 Subject: [openstack-dev] [nova] [placement] compute nodes use of placement In-Reply-To: References: Message-ID: <0484851a-50af-cf28-137f-c967cc2b9b44@gmail.com> ack. will review shortly. thanks, Chris. On 07/30/2018 02:20 PM, Chris Dent wrote: > On Mon, 30 Jul 2018, Jay Pipes wrote: > >> On 07/26/2018 12:15 PM, Chris Dent wrote: >>> The `in_tree` calls happen from the report client method >>> `_get_providers_in_tree` which is called by >>> `_ensure_resource_provider` which can be called from multiple >>> places, but in this case is being called both times from >>> `get_provider_tree_and_ensure_root`, which is also responsible for >>> two of the inventory request. >>> >>> `get_provider_tree_and_ensure_root` is called by `_update` in the >>> resource tracker. >>> >>> `_update` is called by both `_init_compute_node` and >>> `_update_available_resource`. Every single period job iteration. >>> `_init_compute_node` is called from _update_available_resource` >>> itself. >>> >>> That accounts for the overall doubling. >> >> Actually, no. What accounts for the overall doubling is the fact that >> we no longer short-circuit return from _update() when there are no >> known changes in the node's resources. > > I think we're basically agreeing on this: I'm describing the current > state of affairs, not attempting to describe why it is that way. > Your insight helps to explain why. > > I have a set of change in progress which experiments with what > happens if we don't call placement a second time in the _update > call: > >   https://review.openstack.org/#/c/587050/ > > Just to see what might blow up. > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From MM9745 at att.com Mon Jul 30 18:58:11 2018 From: MM9745 at att.com (MCEUEN, MATT) Date: Mon, 30 Jul 2018 18:58:11 +0000 Subject: [openstack-dev] [openstack-helm] PTL non-candidacy for Stein Message-ID: <7C64A75C21BB8D43BD75BB18635E4D896C91714E@MOSTLS1MSGUSRFF.ITServices.sbc.com> Team, I have decided to bow out as PTL for OpenStack-Helm in the Stein cycle. My work focus is shifting to Airship engineering, so I think it makes sense to transition the PTL role to someone who can give OpenStack-Helm their full attention. That said, I plan to remain fully active as an OpenStack-Helm core reviewer and developer, and I'll look for opportunities to align Airship to OpenStack-Helm and leverage OSH as a consumer. It's been a privilege to serve such a great team as PTL -- I'm proud of our team and the work that we've accomplished during OpenStack-Helm's short time as an official project! Thank you, Matt McEuen -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sean.mcginnis at gmx.com Mon Jul 30 19:04:36 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 30 Jul 2018 14:04:36 -0500 Subject: [openstack-dev] [release] Github release tarballs broken In-Reply-To: References: Message-ID: <20180730190435.GA8045@sm-workstation> On Mon, Jul 30, 2018 at 12:02:50PM -0500, Ben Nemec wrote: > According to https://bugs.launchpad.net/pbr/+bug/1742809 our github release > tarballs don't actually work. It seems to be a github-specific issue > because I was unable to reproduce the problem with a tarball from > releases.openstack.org. > > My best guess is that github's release process differs from ours and doesn't > work with our projects. I see a couple of options for fixing that. Either > we figure out how to make Github's release process DTRT for our projects, or > we figure out a way to override Github's release artifacts with our own. > I'm not familiar enough with this to know which is a better (or even > possible) option, so I'm sending this to solicit help. > > Thanks. > > -Ben > >From what I understand, GitHub will provide zip and tar.gz links for all source whenever a tag is applied. It is a very basic operation and does not have any kind of logic for correctly packaging whatever that deliverable is. They even just label the links as "Source code". I am not sure if there is any way to disable this behavior. One option I see is we could link in the tag notes to the official tarballs.openstack.org location. We could also potentially look at using the GitHub API to upload a copy of those to the GitHub release page. But there's always a mirroring delay, and GitHub really is just a mirror of our git repos, so using this as a distribution point really isn't what we want. From openstack at nemebean.com Mon Jul 30 19:51:14 2018 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 30 Jul 2018 14:51:14 -0500 Subject: [openstack-dev] [release] Github release tarballs broken In-Reply-To: <20180730190435.GA8045@sm-workstation> References: <20180730190435.GA8045@sm-workstation> Message-ID: <0cdd5a0e-b7ad-b107-0c34-fb07419b0e11@nemebean.com> On 07/30/2018 02:04 PM, Sean McGinnis wrote: > On Mon, Jul 30, 2018 at 12:02:50PM -0500, Ben Nemec wrote: >> According to https://bugs.launchpad.net/pbr/+bug/1742809 our github release >> tarballs don't actually work. It seems to be a github-specific issue >> because I was unable to reproduce the problem with a tarball from >> releases.openstack.org. >> >> My best guess is that github's release process differs from ours and doesn't >> work with our projects. I see a couple of options for fixing that. Either >> we figure out how to make Github's release process DTRT for our projects, or >> we figure out a way to override Github's release artifacts with our own. >> I'm not familiar enough with this to know which is a better (or even >> possible) option, so I'm sending this to solicit help. >> >> Thanks. >> >> -Ben >> > > From what I understand, GitHub will provide zip and tar.gz links for all source > whenever a tag is applied. It is a very basic operation and does not have any > kind of logic for correctly packaging whatever that deliverable is. > > They even just label the links as "Source code". > > I am not sure if there is any way to disable this behavior. One option I see is > we could link in the tag notes to the official tarballs.openstack.org location. > We could also potentially look at using the GitHub API to upload a copy of > those to the GitHub release page. 
But there's always a mirroring delay, and > GitHub really is just a mirror of our git repos, so using this as a > distribution point really isn't what we want. Yeah, I talked a bit more about this with Monty on IRC, and it turns out there is already an RFE for Github to hide releases that were auto-generated from tags: https://github.community/t5/How-to-use-Git-and-GitHub/Tag-without-release/m-p/7906 Apparently from the github side "releases" already aren't created unless the project does so explicitly, but they show all tags on the release tab anyway so the user-visible difference is pretty much nil. We decided to table this until we find out if Github is going to fix it for us. It doesn't make sense to do a bunch of work and then turn around and not need it because Github rationalized their UI while we were trying to work around it. From johnsomor at gmail.com Mon Jul 30 21:53:33 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Mon, 30 Jul 2018 14:53:33 -0700 Subject: [openstack-dev] [all][Election] [Octavia] Stein Octavia PTL candidacy Message-ID: My fellow OpenStack community, I would like to nominate myself for Octavia PTL for Stein. I am currently the PTL for the Rocky release series and would like to continue helping our team provide network load balancing services for OpenStack. In the Rocky release, we were able to add support for provider drivers, improve the user experience when using Barbican, listener timeouts, dashboard auto refresh, and creating members designated as "backup" members. Looking forward to Stein I expect the team to finish out some major new features, such as Active/Active load balancers and flavors. Thank you for your support of Octavia during Rocky and your consideration for Stein, Michael Johnson (johnsom) From melwittt at gmail.com Mon Jul 30 23:41:01 2018 From: melwittt at gmail.com (melanie witt) Date: Mon, 30 Jul 2018 16:41:01 -0700 Subject: [openstack-dev] [nova][ptl][election] PTL candidacy for the Stein cycle Message-ID: Hello Stackers, I'd like to announce my candidacy for PTL of Nova for the Stein cycle. I feel like Rocky has been a whirlwind of a cycle with a lot of active participation by developers, operators, and users. Thank you all for bearing with me for the past cycle as I have learned the ropes of being a PTL. 
We accomplished a lot in Rocky, with some highlights including: * Experimented with a new review process, "review runways" * Began using the new Neutron port binding API to minimize network downtime during live migrations * Completed the placement side of nested resource providers (Nova integration work still remains) * Volume-backed instances will no longer claim root_gb for new instances and existing instances will heal during move operations * Made progress on removing nova-network-specific REST APIs * Added a nova-manage command to purge archived shadow table data * Doing more pre-filtering in placement before we iterate over compute host candidates with FilterScheduler filters * Added the ability to boot instances with trusted virtual functions * Added the ability to disable a cell in cells v2 * Added a way to mitigate Meltdown/Spectre performance degradation via cpu flags Looking toward Stein, we have more work to do with integrating placement nested resource providers into Nova, implementing migration of flat resource providers => nested tree-based resource providers in placement, adding more resiliency in cells v2 for handling "down" or poorer performing cells, removing nova-network, and more to be discussed and prioritized at the PTG [1]. It would be my privilege to serve the Nova community for another cycle and if elected, I endeavor to do a better job using what I have learned during the Rocky cycle. I am always trying to improve, so please feel free to share your feedback with me. Thank you for your consideration. Best, -melanie [1] https://etherpad.openstack.org/p/nova-ptg-stein From adriant at catalyst.net.nz Tue Jul 31 01:09:27 2018 From: adriant at catalyst.net.nz (Adrian Turjak) Date: Tue, 31 Jul 2018 13:09:27 +1200 Subject: [openstack-dev] [adjutant] [PTL] [Election] PTL candidacy for the Stein cycle Message-ID: <7e7bbf7a-0f45-7da3-e62b-7059dfbc5bd7@catalyst.net.nz> Hello OpenStackers, I'm submitting myself as the PTL for the first cycle that Adjutant will be a part of OpenStack. As the main developer and project lead until now, I'm the best suited to continue leading the project at this time and finish the current work that is needed to bring it to a place where the wider community can embrace what Adjutant offers and better tweak it to their own needs. My focus for Stein will be continuing the refactor work that was started during Queens, and finishing the sub-project management APIs as well as project termination APIs. As such the planned work is: - bringing the codebase to a much nicer state while decoupling certain elements of our internals, and adding support for async task processing. - rework the config system or potentially adopt oslo.config if appropriate - introduce partial policy support rather than relying on hardcoded decorators. - rework notifications to be pluggable and how/when they are sent - finish the long planned support for sub-project management, and the project (and resource) termination logic I hope for it to be a productive cycle, with some of that work broken into smaller pieces that new contributors could potentially help with. 
:) Cheers, Adrian Turjak From zhaochao1984 at gmail.com Tue Jul 31 02:07:32 2018 From: zhaochao1984 at gmail.com (=?UTF-8?B?6LW16LaF?=) Date: Tue, 31 Jul 2018 10:07:32 +0800 Subject: [openstack-dev] [trove] Considering the transfter of the project leadership In-Reply-To: <20180730140741eucas1p2d9c133702b5c0c0fd2d96c5c53f71afa~GKrCj3OAf1572015720eucas1p2g@eucas1p2.samsung.com> References: <20180730140741eucas1p2d9c133702b5c0c0fd2d96c5c53f71afa~GKrCj3OAf1572015720eucas1p2g@eucas1p2.samsung.com> Message-ID: Dariusz Krol, That's great, thanks. Please submit your nomination asap, the deadline is today(2018-07-31 23:45:00 UTC). For detailed guidance about nomination, please refer to https://governance.openstack.org/election/. I think we need some work on https://review.openstack.org/#/c/586528/2, and we're still in the Feature Freeze, so we also need to wait for the stable branch created(this should be done in the next week). On Mon, Jul 30, 2018 at 10:07 PM, Dariusz Krol wrote: > Hello Zhao Chao, > > > after some internal discussion, I will do the nomination if you decided > not to nominate yourself. Thanks for letting know you will be still > available in the next release cycle. > > Regarding commits I would recommend to consider also > https://review.openstack.org/#/c/586528/2 . > > > Best, > > Dariusz Krol > > On 07/30/2018 03:26 AM, 赵超 wrote: > > Since the new folks are still so new - if this works for you - I would >> recommend continuing on as the official PTL for one more release, but >> with the >> understanding that you would just be around to answer questions and give >> advice >> to help the new team get up to speed. That should hopefully be a small >> time >> commitment for you while still easing that transition. >> >> Then hopefully by the T release it would not be an issue at all for >> someone >> else to step up as the new PTL. Or even if things progress well, you >> could step >> down as PTL at some point during the Stein cycle if someone is ready to >> take >> over for you. >> > > Sean, thanks a lot for these helpful suggestions. I thought about doing > it this way before writing this post, and this is also the reason I asked > the current active team members to nominate theselves. > > However, it's sad that the other active team members seems also busy on > other thing. So I think it may be better Dariusz and his team could do more > than us on the project in the next cycle. I believe they're experience on > the project , and all other experiences about the whole OpenStack > environment could be more familiar in the daily pariticipation of the > project. > > On the other hand, I can also understand the lack of time to be a PTL >> since it requires probably a lot of time to coordinate all the work. > > > Dariusz, no, the current team is really a small team, so in fact I didn't > need to do much coordination. The pain is that almost none of the current > active team member are not focusing Trove, so even thought all of us want > to do more progress in this cycle, we're not able to. This also the reason > all of us think it's great to have to team focusing on the project could > join. > > So, we don't have much time on the PTL election now, Dariusz, would you > please discuss with your team who will do the nomination. And then we'll > see if everything could work. 
> We could also try to merge one of the > trove-tempest-plugin patches (https://review.openstack.org/#/c/580763/ > could be merged first, before we get the CI to test all the cases in the > repo; sadly, we currently cannot merge the other patches as they cannot be > tested). > > However, that patch was submitted by Krzysztof, though it is authored by > Dariusz. I don't know whether this can count as an identified commit > when applying for the PTL nomination. > > And last, I want to repeat that I'll still be around Trove development for > quite a long time, so I will help the new PTL and new contributors with > everything I can. > > Thanks again to everyone who helped me a lot in the last cycle, especially > Fan Zhang, zhanggang, wangyao, song.jian and Manoj Kumar. > > -- > To be free as in freedom. > > > > > > -- To be free as in freedom. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 13168 bytes Desc: not available URL:
From lhinds at redhat.com Tue Jul 31 02:25:29 2018 From: lhinds at redhat.com (Luke Hinds) Date: Tue, 31 Jul 2018 09:25:29 +0700 Subject: [openstack-dev] [all][Election] Last days for PTL nomination In-Reply-To: <20180730141741.ri4ugsvvvq2csz2x@yuggoth.org> References: <20180730013519.GA4829@thor.bakeyournoodle.com> <20180730141741.ri4ugsvvvq2csz2x@yuggoth.org> Message-ID:
On Mon, 30 Jul 2018, 21:19 Jeremy Stanley, wrote: > On 2018-07-30 15:23:57 +0700 (+0700), Luke Hinds wrote: > > Security is a SIG and no longer a project (changed as of rocky cycle). > > Technically it's still both at the moment, which is why I proposed > https://review.openstack.org/586896 yesterday (tried to give you a > heads up in IRC about that as well). A +1 from the current PTL of > record on that change would probably be a good idea. > I am on PTO for the next two weeks; is a +1 in this email ok? I don't have my launchpad credentials with me to SSO login to Gerrit. -- > Jeremy Stanley > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL:
From sundar.nadathur at intel.com Tue Jul 31 02:35:47 2018 From: sundar.nadathur at intel.com (Nadathur, Sundar) Date: Mon, 30 Jul 2018 19:35:47 -0700 Subject: [openstack-dev] [Nova] [Cyborg] Updates to os-acc proposal Message-ID: <49db1d12-1fd3-93d1-a31e-8a2a5a35654d@intel.com>
Hi Eric and all, With recent discussions [1], we have convergence on how Power and other architectures can use Cyborg. Before I update the spec [2], I am setting down some key aspects of the updates, so that we are all aligned. The accelerator-instance attachment has two parts: * The connection between the accelerator and a host-visible attach handle, such as a PCI function or a mediated device UUID. We call this the Device Half of the attach. * The connection between the attach handle and the instance. We name this the Instance Half of the attach.
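To make that split concrete, a minimal, non-authoritative Python sketch follows; the driver method names simply mirror the interface summary further down in this mail (can_handle, prepareVAN, postplug, unprepareVAN, get_devices), while the plugin method name and the Stevedore namespace are hypothetical:

import abc

from stevedore import driver as stevedore_driver


class CyborgDriver(abc.ABC):
    # Device Half: prepare and release the device side of an attach.

    @abc.abstractmethod
    def get_devices(self):
        """Enumerate accelerator devices; called by the Cyborg agent."""

    @abc.abstractmethod
    def can_handle(self, device):
        """Return True if this driver manages the given device."""

    @abc.abstractmethod
    def prepareVAN(self, instance_info, device):
        """Configure/program the device and return a VAN handle,
        e.g. carrying a PCI VF address or a Power DRC index."""

    @abc.abstractmethod
    def postplug(self, van):
        """Post-attach cleanup, if any."""

    @abc.abstractmethod
    def unprepareVAN(self, van):
        """Release device resources on detach or on a failed attach."""


class OsAccPlugin(abc.ABC):
    # Instance Half: wire an already-prepared VAN into the instance.

    @abc.abstractmethod
    def plug(self, instance_info, van):
        """e.g. for libvirt with PCI, emit the domain XML snippet."""


def load_driver(name):
    # Stevedore lookup; the namespace string is purely illustrative.
    return stevedore_driver.DriverManager(
        namespace='cyborg.accelerator.drivers',
        name=name,
        invoke_on_load=True).driver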
I propose two different extensibility mechanisms: * Cyborg drivers deal with device-specific aspects, including discovery/enumeration of devices and handling the Device Half of the attach (preparing devices/accelerators for attach to an instance, post-attach cleanup (if any) after successful attach, releasing device/accelerator resources on instance termination or failed attach, etc.) * os-acc plugins deal with hypervisor/system/architecture-specific aspects, including handling the Instance Half of the attach (e.g. for libvirt with PCI, preparing the XML snippet to be included in the domain XML). When invoked by Nova compute to attach accelerator(s) to an instance, os-acc would call the Cyborg driver to prepare a VAN (Virtual Accelerator Nexus, which is a handle object for attaching an accelerator to an instance, similar to VIFs for networking). Such preparation may involve configuring the device in some way, including programming for FPGAs. This sets up a VAN object with the necessary data for the attach (e.g. PCI VF, Power DRC index, etc.). Then the os-acc would call a plugin to do the needful for that hypervisor, using that VAN. Finally the os-acc may call the Cyborg driver again to do any post-attach cleanup, if needed. A more detailed workflow is here: https://docs.google.com/drawings/d/1cX06edia_Pr7P5nOB08VsSMsgznyrz4Yy2u8nb596sU/edit?usp=sharing Thus, the drivers and plugins are expected to be complementary. For example, for 2 devices of types T1 and T2, there shall be 2 separate Cyborg drivers. Further, we would have separate plugins for, say, x86+KVM systems and Power systems. We could then have four different deployments -- T1 on x86+KVM, T2 on x86+KVM, T1 on Power, T2 on Power -- by suitable combinations of the drivers and plugins. It is possible that there may be scenarios where the separation of roles between the plugins and the drivers are not so clear-cut. That can be addressed by allowing the plugins to call into Cyborg drivers in the future and/or by other mechanisms. One secondary detail to note is that Nova compute calls os-acc per instance for all accelerators for that instance, not once for each accelerator. There are two reasons for that: * I think this is how Nova deals with os-vif [3]. * If some accelerators got allocated/configured, and the next accelerator configuration fails, a rollback needs to be done. This is better done in os-acc than Nova compute. Cyborg drivers are invoked both by the Cyborg agent (for discovery/enumeration) and by os-acc (for instance attach). Both shall use Stevedore to locate and load the drivers. A single Python module may implement both sets of interfaces, like this: +--------------+ +-------+ | Nova Compute | |Cyborg | +----+---------+ |Agent | | +---+---+ +----v---+ | | os-acc | | +----+---+ | | | | Cyborg driver | +----v----------------+------v-----------+ |UN/PLUG ACCELERATORS | DISCOVER | |FROM INSTANCES | ACCELERATORS | | | | |* can_handle() | * get_devices() | |* prepareVAN() | | |* postplug() | | |* unprepareVAN() | | +---------------------+------------------+ If there are no objections to the above, I will update the spec [2]. [1] http://eavesdrop.openstack.org/irclogs/%23openstack-cyborg/%23openstack-cyborg.2018-07-30.log.html#t2018-07-30T16:25:41-2 [2] https://review.openstack.org/#/c/577438/ [3] https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L1529 Regards, Sundar -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From smonderer at vasonanetworks.com Tue Jul 31 07:44:44 2018 From: smonderer at vasonanetworks.com (Samuel Monderer) Date: Tue, 31 Jul 2018 10:44:44 +0300 Subject: [openstack-dev] [tripleo] deployement fails In-Reply-To: <56B14333-FEB6-41C0-9150-C6F536B535BB@rm.ht> References: <2896B456-87F1-4F54-A4E7-BD06F2CCECF2@rm.ht> <56B14333-FEB6-41C0-9150-C6F536B535BB@rm.ht> Message-ID: Removing it just made it longer to time out On Mon, Jul 30, 2018 at 7:51 PM, Remo Mattei wrote: > Take it off and check :) > > > > On Jul 30, 2018, at 09:46, Samuel Monderer > wrote: > > Yes > I tried eith 60 and 120 > > On Mon, Jul 30, 2018, 19:42 Remo Mattei wrote: > >> Do you have a timeout set? >> >> > On Jul 30, 2018, at 07:48, Samuel Monderer < >> smonderer at vasonanetworks.com> wrote: >> > >> > Hi, >> > >> > I'm trying to deploy a small environment with one controller and one >> compute but i get a timeout with no specific information in the logs >> > >> > 2018-07-30 13:19:41Z [overcloud.Controller.0.ControllerConfig]: >> CREATE_IN_PROGRESS state changed >> > 2018-07-30 13:19:41Z [overcloud.Controller.0.ControllerConfig]: >> CREATE_COMPLETE state changed >> > 2018-07-30 14:04:51Z [overcloud.ComputeGammaV3]: CREATE_FAILED CREATE >> aborted (Task create from ResourceGroup "ComputeGammaV3" Stack "overcloud" >> [690ee33c-8194-4713-a44f-9c8dcf88359f] Timed out) >> > 2018-07-30 14:04:51Z [overcloud.ComputeGammaV3]: UPDATE_FAILED Stack >> UPDATE cancelled >> > 2018-07-30 14:04:51Z [overcloud]: CREATE_FAILED Timed out >> > 2018-07-30 14:04:51Z [overcloud.ComputeGammaV3.0]: CREATE_FAILED Stack >> CREATE cancelled >> > 2018-07-30 14:04:51Z [overcloud.Controller]: CREATE_FAILED CREATE >> aborted (Task create from ResourceGroup "Controller" Stack "overcloud" >> [690ee33c-8194-4713-a44f-9c8dcf88359f] Timed out) >> > 2018-07-30 14:04:51Z [overcloud]: CREATE_FAILED Timed out >> > 2018-07-30 14:04:51Z [overcloud.Controller]: UPDATE_FAILED Stack >> UPDATE cancelled >> > 2018-07-30 14:04:51Z [overcloud.Controller.0]: CREATE_FAILED Stack >> CREATE cancelled >> > 2018-07-30 14:04:52Z [overcloud.ComputeGammaV3.0]: CREATE_FAILED >> resources[0]: Stack CREATE cancelled >> > >> > Stack overcloud CREATE_FAILED >> > >> > overcloud.ComputeGammaV3.0: >> > resource_type: OS::TripleO::ComputeGammaV3 >> > physical_resource_id: 5755d746-7cbf-4f3d-a9e1-d94a713705a7 >> > status: CREATE_FAILED >> > status_reason: | >> > resources[0]: Stack CREATE cancelled >> > overcloud.Controller.0: >> > resource_type: OS::TripleO::Controller >> > physical_resource_id: 4bcf84c1-1d54-45ee-9f81-b6dda780cbd7 >> > status: CREATE_FAILED >> > status_reason: | >> > resources[0]: Stack CREATE cancelled >> > Not cleaning temporary directory /tmp/tripleoclient-vxGzKo >> > Not cleaning temporary directory /tmp/tripleoclient-vxGzKo >> > Heat Stack create failed. >> > Heat Stack create failed. 
>> > (undercloud) [stack at staging-director ~]$ >> > >> > It seems that it wasn't able to configure the OVS bridges >> > >> > (undercloud) [stack at staging-director ~]$ openstack software deployment >> show 4b4fc54f-7912-40e2-8ad4-79f6179fe701 >> > +---------------+------------------------------------------- >> -------------+ >> > | Field | Value >> | >> > +---------------+------------------------------------------- >> -------------+ >> > | id | 4b4fc54f-7912-40e2-8ad4-79f6179fe701 >> | >> > | server_id | 0accb7a3-4869-4497-8f3b-5a3d99f3926b >> | >> > | config_id | 2641b4dd-afc7-4bf5-a2e2-481c207e4b7f >> | >> > | creation_time | 2018-07-30T13:19:44Z >> | >> > | updated_time | >> | >> > | status | IN_PROGRESS >> | >> > | status_reason | Deploy data available >> | >> > | input_values | {u'interface_name': u'nic1', u'bridge_name': >> u'br-ex'} | >> > | action | CREATE >> | >> > +---------------+------------------------------------------- >> -------------+ >> > (undercloud) [stack at staging-director ~]$ openstack software deployment >> show a297e8ae-f4c9-41b0-938f-c51f9fe23843 >> > +---------------+------------------------------------------- >> -------------+ >> > | Field | Value >> | >> > +---------------+------------------------------------------- >> -------------+ >> > | id | a297e8ae-f4c9-41b0-938f-c51f9fe23843 >> | >> > | server_id | 145167da-9b96-4eee-bfe9-399b854c1e84 >> | >> > | config_id | d1baf0a5-de9b-48f2-b486-9f5d97f7e94f >> | >> > | creation_time | 2018-07-30T13:17:29Z >> | >> > | updated_time | >> | >> > | status | IN_PROGRESS >> | >> > | status_reason | Deploy data available >> | >> > | input_values | {u'interface_name': u'nic1', u'bridge_name': >> u'br-ex'} | >> > | action | CREATE >> | >> > +---------------+------------------------------------------- >> -------------+ >> > (undercloud) [stack at staging-director ~]$ >> > >> > Regards, >> > Samuel >> > ____________________________________________________________ >> ______________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >> unsubscribe >> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >> unsubscribe >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From thierry at openstack.org Tue Jul 31 08:34:08 2018 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 31 Jul 2018 10:34:08 +0200 Subject: [openstack-dev] [all][Election] Last days for PTL nomination In-Reply-To: References: <20180730013519.GA4829@thor.bakeyournoodle.com> <20180730141741.ri4ugsvvvq2csz2x@yuggoth.org> Message-ID:
Luke Hinds wrote: > On Mon, 30 Jul 2018, 21:19 Jeremy Stanley, > wrote: > > On 2018-07-30 15:23:57 +0700 (+0700), Luke Hinds wrote: > > Security is a SIG and no longer a project (changed as of rocky > cycle). > > Technically it's still both at the moment, which is why I proposed > https://review.openstack.org/586896 yesterday (tried to give you a > heads up in IRC about that as well). A +1 from the current PTL of > record on that change would probably be a good idea. > > > I am on PTO for the next two weeks; is a +1 in this email ok? I don't > have my launchpad credentials with me to SSO login to Gerrit. Sure, I'll reference this email there. Thanks Luke, have a great PTO! -- Thierry Carrez (ttx)
From forrest.zhao at intel.com Tue Jul 31 09:12:36 2018 From: forrest.zhao at intel.com (Zhao, Forrest) Date: Tue, 31 Jul 2018 09:12:36 +0000 Subject: [openstack-dev] [neutron] Port mirroring for SR-IOV VF to VF mirroring Message-ID: <6345119E91D5C843A93D64F498ACFA136999ECF2@SHSMSX101.ccr.corp.intel.com>
Hi Miguel, In your mail "PTL candidacy for the Stein cycle", you mentioned that "port mirroring for SR-IOV VF to VF mirroring" is within the Stein goals. Could you tell me where the design for this feature will be discussed? Mailing list, IRC channel, weekly meeting or somewhere else? I was involved in its spec review at https://review.openstack.org/#/c/574477/, but it has not been updated for a while. Thanks, Forrest -------------- next part -------------- An HTML attachment was scrubbed... URL:
From mmagr at redhat.com Tue Jul 31 09:24:25 2018 From: mmagr at redhat.com (Martin Magr) Date: Tue, 31 Jul 2018 11:24:25 +0200 Subject: [openstack-dev] [tripleo][ci][metrics] Stucked in the middle of work because of RDO CI Message-ID:
Greetings guys, it is pretty obvious that the RDO CI jobs in TripleO projects are broken [0]. Once the Zuul CI jobs pass, would it be possible to have the AMQP/collectd patches ([1],[2],[3]) merged, please, even though the RDO CI jobs report a negative result? Half of the patches for this feature are merged and the other half are stuck in this situation, where nobody reviews these patches because there is a red -1. Those patches have passed the Zuul jobs several times already and were manually tested too. Thanks in advance for considering this situation, Martin
[0] https://trello.com/c/hkvfxAdX/667-cixtripleoci-rdo-software-factory-3rd-party-jobs-failing-due-to-instance-nodefailure [1] https://review.openstack.org/#/c/578749 [2] https://review.openstack.org/#/c/576057/ [3] https://review.openstack.org/#/c/572312/ -- Martin Mágr Senior Software Engineer Red Hat Czech -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From eumel at arcor.de Tue Jul 31 09:39:54 2018 From: eumel at arcor.de (Frank Kloeker) Date: Tue, 31 Jul 2018 11:39:54 +0200 Subject: [openstack-dev] [i18n] Edge and Containers whitepapers ready for translation In-Reply-To: References: <5B4CB64F.4060602@openstack.org> <5B4CB93F.6070202@openstack.org> <5B4DF6B4.9030501@openstack.org> <7f053a98-718b-0470-9ffa-934f0e715a76@gmail.com> <5B4E132E.5050607@openstack.org> <5B50A476.8010606@openstack.org> <5B5F295F.3090608@openstack.org> <1f5afd62cc3a9a8923586a404e707366@arcor.de> Message-ID: <16e69b47c8b71bf6f920ab8f3df61928@arcor.de> Hi Sebastian, okay, it's translated now. In Edge whitepaper is the problem with XML-Parsing of the term AT&T. Don't know how to escape this. Maybe you will see the warning during import too. kind regards Frank Am 2018-07-30 20:09, schrieb Sebastian Marcet: > Hi Frank, > i was double checking pot file and realized that original pot missed > some parts of the original paper (subsections of the paper) apologizes > on that > i just re uploaded an updated pot file with missing subsections > > regards > > On Mon, Jul 30, 2018 at 2:20 PM, Frank Kloeker wrote: > >> Hi Jimmy, >> >> from the GUI I'll get this link: >> > https://translate.openstack.org/rest/file/translation/edge-computing/pot-translation/de/po?docId=cloud-edge-computing-beyond-the-data-center >> [1] >> >> paper version are only in container whitepaper: >> >> > https://translate.openstack.org/rest/file/translation/leveraging-containers-openstack/paper/de/po?docId=leveraging-containers-and-openstack >> [2] >> >> In general there is no group named papers >> >> kind regards >> >> Frank >> >> Am 2018-07-30 17:06, schrieb Jimmy McArthur: >> Frank, >> >> We're getting a 404 when looking for the pot file on the Zanata API: >> > https://translate.openstack.org/rest/file/translation/papers/papers/de/po?docId=edge-computing >> [3] >> >> As a result, we can't pull the po files. Any idea what might be >> happening? >> >> Seeing the same thing with both papers... >> >> Thank you, >> Jimmy >> >> Frank Kloeker wrote: >> Hi Jimmy, >> >> Korean and German version are now done on the new format. Can you >> check publishing? >> >> thx >> >> Frank >> >> Am 2018-07-19 16:47, schrieb Jimmy McArthur: >> Hi all - >> >> Follow up on the Edge paper specifically: >> > https://translate.openstack.org/iteration/view/edge-computing/pot-translation/documents?dswid=-3192 >> [4] This is now available. As I mentioned on IRC this morning, it >> should >> be VERY close to the PDF. Probably just needs a quick review. >> >> Let me know if I can assist with anything. >> >> Thank you to i18n team for all of your help!!! >> >> Cheers, >> Jimmy >> >> Jimmy McArthur wrote: >> Ian raises some great points :) I'll try to address below... >> >> Ian Y. Choi wrote: >> Hello, >> >> When I saw overall translation source strings on container >> whitepaper, I would infer that new edge computing whitepaper >> source strings would include HTML markup tags. >> One of the things I discussed with Ian and Frank in Vancouver is >> the expense of recreating PDFs with new translations. It's >> prohibitively expensive for the Foundation as it requires design >> resources which we just don't have. As a result, we created the >> Containers whitepaper in HTML, so that it could be easily updated >> w/o working with outside design contractors. I indicated that we >> would also be moving the Edge paper to HTML so that we could prevent >> that additional design resource cost. 
>> On the other hand, the source strings of edge computing whitepaper >> which I18n team previously translated do not include HTML markup >> tags, since the source strings are based on just text format. >> The version that Akihiro put together was based on the Edge PDF, >> which we unfortunately didn't have the resources to implement in the >> same format. >> >> I really appreciate Akihiro's work on RST-based support on >> publishing translated edge computing whitepapers, since >> translators do not have to re-translate all the strings. >> I would like to second this. It took a lot of initiative to work on >> the RST-based translation. At the moment, it's just not usable for >> the reasons mentioned above. >> On the other hand, it seems that I18n team needs to investigate on >> translating similar strings of HTML-based edge computing whitepaper >> source strings, which would discourage translators. >> Can you expand on this? I'm not entirely clear on why the HTML >> based translation is more difficult. >> >> That's my point of view on translating edge computing whitepaper. >> >> For translating container whitepaper, I want to further ask the >> followings since *I18n-based tools* >> would mean for translators that translators can test and publish >> translated whitepapers locally: >> >> - How to build translated container whitepaper using original >> Silverstripe-based repository? >> https://docs.openstack.org/i18n/latest/tools.html [5] describes >> well how to build translated artifacts for RST-based OpenStack >> repositories >> but I could not find the way how to build translated container >> whitepaper with translated resources on Zanata. >> This is a little tricky. It's possible to set up a local version >> of the OpenStack website >> > (https://github.com/OpenStackweb/openstack-org/blob/master/installation.md >> [6]). However, we have to manually ingest the po files as they are >> completed and then push them out to production, so that wouldn't do >> much to help with your local build. I'm open to suggestions on how >> we can make this process easier for the i18n team. >> >> Thank you, >> Jimmy >> >> With many thanks, >> >> /Ian >> >> Jimmy McArthur wrote on 7/17/2018 11:01 PM: >> Frank, >> >> I'm sorry to hear about the displeasure around the Edge paper. As >> mentioned in a prior thread, the RST format that Akihiro worked did >> not work with the Zanata process that we have been using with our >> CMS. Additionally, the existing EDGE page is a PDF, so we had to >> build a new template to work with the new HTML whitepaper layout we >> created for the Containers paper. I outlined this in the thread " >> [OpenStack-I18n] [Edge-computing] [Openstack-sigs] Edge Computing >> Whitepaper Translation" on 6/25/18 and mentioned we would be ready >> with the template around 7/13. >> >> We completed the work on the new whitepaper template and then put >> out the pot files on Zanata so we can get the po language files >> back. If this process is too cumbersome for the translation team, >> I'm open to discussion, but right now our entire translation process >> is based on the official OpenStack Docs translation process outlined >> by the i18n team: >> https://docs.openstack.org/i18n/latest/en_GB/tools.html [7] >> >> Again, I realize Akihiro put in some work on his own proposing the >> new translation type. If the i18n team is moving to this format >> instead, we can work on redoing our process. >> >> Please let me know if I can clarify further. 
>> >> Thanks, >> Jimmy >> >> Frank Kloeker wrote: >> Hi Jimmy, >> >> permission was added for you and Sebastian. The Container Whitepaper >> is on the Zanata frontpage now. But we removed Edge Computing >> whitepaper last week because there is a kind of displeasure in the >> team since the results of translation are still not published beside >> Chinese version. It would be nice if we have a commitment from the >> Foundation that results are published in a specific timeframe. This >> includes your requirements until the translation should be >> available. >> >> thx Frank >> >> Am 2018-07-16 17:26, schrieb Jimmy McArthur: >> Sorry, I should have also added... we additionally need permissions >> so >> that we can add the a new version of the pot file to this project: >> > https://translate.openstack.org/project/view/edge-computing/versions?dswid=-7835 >> [8] Thanks! >> Jimmy >> >> Jimmy McArthur wrote: >> Hi all - >> >> We have both of the current whitepapers up and available for >> translation. Can we promote these on the Zanata homepage? >> >> > https://translate.openstack.org/project/view/leveraging-containers-openstack?dswid=5684 >> [9] >> > https://translate.openstack.org/iteration/view/edge-computing/master/documents?dswid=5684 >> [10] Thanks all! >> Jimmy >> >> > __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe [11] >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> [12] > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe [11] > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [12] > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe [11] > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [12] > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe [11] > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [12] > > > > Links: > ------ > [1] > https://translate.openstack.org/rest/file/translation/edge-computing/pot-translation/de/po?docId=cloud-edge-computing-beyond-the-data-center > [2] > https://translate.openstack.org/rest/file/translation/leveraging-containers-openstack/paper/de/po?docId=leveraging-containers-and-openstack > [3] > https://translate.openstack.org/rest/file/translation/papers/papers/de/po?docId=edge-computing > [4] > https://translate.openstack.org/iteration/view/edge-computing/pot-translation/documents?dswid=-3192 > [5] https://docs.openstack.org/i18n/latest/tools.html > [6] > https://github.com/OpenStackweb/openstack-org/blob/master/installation.md > [7] https://docs.openstack.org/i18n/latest/en_GB/tools.html > [8] > https://translate.openstack.org/project/view/edge-computing/versions?dswid=-7835 > [9] > https://translate.openstack.org/project/view/leveraging-containers-openstack?dswid=5684 > [10] > https://translate.openstack.org/iteration/view/edge-computing/master/documents?dswid=5684 > [11] > http://OpenStack-dev-request 
at lists.openstack.org?subject:unsubscribe > [12] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From thierry at openstack.org Tue Jul 31 10:07:07 2018 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 31 Jul 2018 12:07:07 +0200 Subject: [openstack-dev] [release] Stein: a slightly longer release cycle Message-ID: <14088ed1-e73a-04e3-daad-654b51104dd1@openstack.org> Hi everyone, As we approach the final stages of the Rocky release, it's time to start planning Stein work. The Stein release schedule is available here: https://releases.openstack.org/stein/schedule.html As discussed[1] during the Vancouver Board+TC+UC meeting, the Foundation will be holding the first PTG in 2019 immediately after the Denver summit in April, 2019 (in the same venue). Since we want to place the PTG close to the cycle start, this results in a slightly-longer release cycle, with the Stein release set to April 10, 2019. That makes Stein 4 weeks longer than Havana or Kilo, our longest cycles so far. That said, with the Berlin summit, Thanksgiving, the long end-of-year holiday break, and Chinese new year, there will be a lot of work time lost during this cycle (like during all of our Northern-hemisphere winter cycles), so the release management team doesn't really expect Stein to feel that much longer, or work planning to be significantly impacted. Cheers, [1] http://lists.openstack.org/pipermail/foundation/2018-June/002598.html -- Thierry Carrez (ttx) From tobias.urdin at crystone.com Tue Jul 31 10:13:46 2018 From: tobias.urdin at crystone.com (Tobias Urdin) Date: Tue, 31 Jul 2018 10:13:46 +0000 Subject: [openstack-dev] [puppet] [PTL] [Election] PTL candidacy for the Stein cycle Message-ID: <5309aa88b4d14abfb5f2ce6d8c9bbf45@mb01.staff.ognet.se> Hello Stackers, I'm submitting myself as PTL candidate for the Puppet OpenStack project. [0] I've been active in the OpenStack community since late 2014 early 2015 and have had a lot of focus on the Puppet OpenStack project since about 2016. I've been a core reviewer for about five months now and it's been really cool to be able to give something back to the community. We have had a lot of progress this cycle. * Remove a lot of deprecate parameters * Improved testing of Puppet 5 * Added Debian 9 support (Python 3 only) * Added Ubuntu 18.04 Bionic support * Fixed some bugs * Moved to more usage of the shared openstacklib resources * Added neutron-dynamic-routing support * Added horizon dashboard installation support * Changed keystone to use port 5000 and deprecated usage of port 35357 (still deploys both) I could ramble up a lot more in that list but I really think we've done a good job but we still have some major things moving forward that we'll have to work on. Here is some major things I think we'll need to work on or discuss. * Python 3 will be a big one, I know people are working on Fedora for testing here, but we also have Debian9 here which is python3-only so thanks to Thomas (zigo) we have somebody that has paved the way here. * Puppet 5 data types for parameters and removing validate_* functions is a big one which we also have an open blueprint and PoC for but will require a lot of interaction with the TripleO team. [1] [2] * CI stability and maintenance will be a reoccurring thing we'll need to focus on. 
* Puppet providers are usually slow due to CLI utilies, we need to work together to improve the performance of the CLI tooling or consider the move to API calls, this has been up before but there hasn't been anybody there that has sponsored such work. I want to really thank all of you for your huge amounts of work, all across the OpenStack board and the Puppet OpenStack team. Thank you for considering me. Best regards Tobias (tobasco @ IRC) [0] https://review.openstack.org/#/c/587372/ [1] https://review.openstack.org/#/c/568929/ [2] https://review.openstack.org/#/c/569566/ From pranabjyotiboruah at gmail.com Tue Jul 31 10:45:57 2018 From: pranabjyotiboruah at gmail.com (pranab boruah) Date: Tue, 31 Jul 2018 16:15:57 +0530 Subject: [openstack-dev] [Openstack] [nova] [os-vif] [vif_plug_ovs] Support for OVS DB tcp socket communication. Message-ID: >Hello Pranab, >Makes sense for me. This is really related to the OVS plugin that we >are maintaining. I guess you will have to add a new config option for >it as we have with 'network_device_mtu' and 'ovs_vsctl_timeout'. >Don't hesitate to add me as reviewer when patch is ready. Thanks Sahid. Here is the proposed patch: https://review.openstack.org/#/c/587378/ Please review. Regards, Pranab -------------- next part -------------- An HTML attachment was scrubbed... URL: From muroi.masahito at lab.ntt.co.jp Tue Jul 31 10:58:58 2018 From: muroi.masahito at lab.ntt.co.jp (Masahito MUROI) Date: Tue, 31 Jul 2018 19:58:58 +0900 Subject: [openstack-dev] [Blazar] PTL non candidacy Message-ID: Hi Blazar folks, I just want to announce that I'm not running the PTL for the Stein cycle. I have been running this position from the Ocata cycle when we revived the project. We've been done lots of successful activities in the last 4 cycles. I think it's time to change the position to someone else to move the Blazar project further forward. I'll still be around the project and try to make the Blazar project great. Thanks for lots of your supports. best regards, Masahito From cdent+os at anticdent.org Tue Jul 31 11:09:23 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 31 Jul 2018 12:09:23 +0100 (BST) Subject: [openstack-dev] [tc] [all] TC Report 18-31 Message-ID: HTML: https://anticdent.org/tc-report-18-31.html Welcome to this week's TC Report. Again a slow week. A small number of highlights to report. [Last Thursday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-26.log.html#t2018-07-26T15:03:57) there was some discussion of the health of the Trove project and how one of the issues that may have limited their success were struggles to achieve a [sane security model](https://review.openstack.org/#/c/438134/). That and other struggles led to lots of downstream forking and variance which complicates presenting a useful tool. [On Monday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-07-30.log.html) there was talk about the nature of the PTL role and whether it needs to change somewhat to help break down the silos between projects and curtail burnout. This was initially prompted by some concern that PTL nominations were lagging. As usual, there were many last minute nominations. The volume of work that continues to consolidate on individuals is concerning. We must figure out how to let some things drop. This is an area where the TC must demonstrate some leadership, but it's very unclear at this point how to change things. 
Based on [this message](http://lists.openstack.org/pipermail/openstack-dev/2018-July/132651.html) from Thierry on a slightly longer Stein cycle, the idea that the first PTG in 2019 is going to be co-located with the Summit is, if not definite, near as. There's more on that in the second paragraph of the [Vancouver Summit Joint Leadership Meeting Update](http://lists.openstack.org/pipermail/foundation/2018-June/002598.html). If you have issues that you would like the TC to discuss—or to discuss with the TC—at the [PTG coming in September](https://www.openstack.org/ptg), please add to the [planning etherpad](https://etherpad.openstack.org/p/tc-stein-ptg). -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From prad at redhat.com Tue Jul 31 11:38:48 2018 From: prad at redhat.com (Pradeep Kilambi) Date: Tue, 31 Jul 2018 07:38:48 -0400 Subject: [openstack-dev] [tripleo] The Weekly Owl - 25th Edition In-Reply-To: <1532974544.5688.10.camel@redhat.com> References: <1532974544.5688.10.camel@redhat.com> Message-ID: On Mon, Jul 30, 2018 at 2:17 PM Jill Rouleau wrote: > On Mon, 2018-07-30 at 11:35 -0400, Pradeep Kilambi wrote: > > > > > > On Mon, Jul 30, 2018 at 10:42 AM Alex Schultz > > wrote: > > > On Mon, Jul 30, 2018 at 8:32 AM, Martin Magr > > > wrote: > > > > > > > > > > > > On Tue, Jul 17, 2018 at 6:12 PM, Emilien Macchi > > m> wrote: > > > >> > > > >> Your fellow reporter took a break from writing, but is now back > > > on his > > > >> pen. > > > >> > > > >> Welcome to the twenty-fifth edition of a weekly update in TripleO > > > world! > > > >> The goal is to provide a short reading (less than 5 minutes) to > > > learn > > > >> what's new this week. > > > >> Any contributions and feedback are welcome. > > > >> Link to the previous version: > > > >> http://lists.openstack.org/pipermail/openstack-dev/2018-June/1314 > > > 26.html > > > >> > > > >> +---------------------------------+ > > > >> | General announcements | > > > >> +---------------------------------+ > > > >> > > > >> +--> Rocky Milestone 3 is next week. After, any feature code will > > > require > > > >> Feature Freeze Exception (FFE), asked on the mailing-list. We'll > > > enter a > > > >> bug-fix only and stabilization period, until we can push the > > > first stable > > > >> version of Rocky. > > > > > > > > > > > > Hey guys, > > > > > > > > I would like to ask for FFE for backup and restore, where we > > > ended up > > > > deciding where is the best place for the code base for this > > > project (please > > > > see [1] for details). We believe that B&R support for overcloud > > > control > > > > plane will be good addition to a rocky release, but we started > > > with this > > > > initiative quite late indeed. The final result should the support > > > in > > > > openstack client, where "openstack overcloud (backup|restore)" > > > would work as > > > > a charm. Thanks in advance for considering this feature. > > > > > > > > > > Was there a blueprint/spec for this effort? Additionally do we have > > > a > > > list of the outstanding work required for this? If it's just these > > > two > > > playbooks, it might be ok for an FFE. But if there's additional > > > tripleoclient related changes, I wouldn't necessarily feel > > > comfortable > > > with these unless we have a complete list of work. Just as a side > > > note, I'm not sure putting these in tripleo-common is going to be > > > the > > > ideal place for this. > > Was it this review? 
https://review.openstack.org/#/c/582453/ > > For Stein we'll have an ansible role[0] and playbook repo[1] where these > types of tasks should live. > > [0] https://github.com/openstack/ansible-role-openstack-operations > [1] https://review.openstack.org/#/c/583415/ Thanks Jill! The issue is, we want to be able to backport this to Queens once merged. With the new repos you're mentioning would this be possible? If no, then this wont work for us unfortunately. > > > > > > Thanks Alex. For Rocky, if we can ship the playbooks with relevant > > docs we should be good. We will integrated with client in Stein > > release with restore logic included. Regarding putting tripleo-common, > > we're open to suggestions. I think Dan just submitted the review so we > > can get some eyes on the playbooks. Where do you suggest is better > > place for these instead? > > > > > > > > Thanks, > > > -Alex > > > > > > > Regards, > > > > Martin > > > > > > > > [1] https://review.openstack.org/#/c/582453/ > > > > > > > >> > > > >> +--> Next PTG will be in Denver, please propose topics: > > > >> https://etherpad.openstack.org/p/tripleoci-ptg-stein > > > >> +--> Multiple squads are currently brainstorming a framework to > > > provide > > > >> validations pre/post upgrades - stay in touch! > > > >> > > > >> +------------------------------+ > > > >> | Continuous Integration | > > > >> +------------------------------+ > > > >> > > > >> +--> Sprint theme: migration to Zuul v3 (More on > > > >> https://trello.com/c/vyWXcKOB/841-sprint-16-goals) > > > >> +--> Sagi is the rover and Chandan is the ruck. Please tell them > > > any CI > > > >> issue. > > > >> +--> Promotion on master is 4 days, 0 days on Queens and Pike and > > > 1 day on > > > >> Ocata. > > > >> +--> More: https://etherpad.openstack.org/p/tripleo-ci-squad-meet > > > ing > > > >> > > > >> +-------------+ > > > >> | Upgrades | > > > >> +-------------+ > > > >> > > > >> +--> Good progress on major upgrades workflow, need reviews! > > > >> +--> More: https://etherpad.openstack.org/p/tripleo-upgrade-squad > > > -status > > > >> > > > >> +---------------+ > > > >> | Containers | > > > >> +---------------+ > > > >> > > > >> +--> We switched python-tripleoclient to deploy containerized > > > undercloud > > > >> by default! > > > >> +--> Image prepare via workflow is still work in progress. > > > >> +--> More: > > > >> https://etherpad.openstack.org/p/tripleo-containers-squad-status > > > >> > > > >> +----------------------+ > > > >> | config-download | > > > >> +----------------------+ > > > >> > > > >> +--> UI integration is almost done (need review) > > > >> +--> Bug with failure listing is being fixed: > > > >> https://bugs.launchpad.net/tripleo/+bug/1779093 > > > >> +--> More: > > > >> https://etherpad.openstack.org/p/tripleo-config-download-squad-st > > > atus > > > >> > > > >> +--------------+ > > > >> | Integration | > > > >> +--------------+ > > > >> > > > >> +--> We're enabling decoupled deployment plans e.g for OpenShift, > > > DPDK > > > >> etc: > > > >> https://review.openstack.org/#/q/topic:alternate_plans+(status:op > > > en+OR+status:merged) > > > >> (need reviews). > > > >> +--> More: > > > >> https://etherpad.openstack.org/p/tripleo-integration-squad-status > > > >> > > > >> +---------+ > > > >> | UI/CLI | > > > >> +---------+ > > > >> > > > >> +--> Good progress on network configuration via UI > > > >> +--> Config-download patches are being reviewed and a lot of > > > testing is > > > >> going on. 
> > > >> +--> More: https://etherpad.openstack.org/p/tripleo-ui-cli-squad- > > > status > > > >> > > > >> +---------------+ > > > >> | Validations | > > > >> +---------------+ > > > >> > > > >> +--> Working on OpenShift validations, need reviews. > > > >> +--> More: > > > >> https://etherpad.openstack.org/p/tripleo-validations-squad-status > > > >> > > > >> +---------------+ > > > >> | Networking | > > > >> +---------------+ > > > >> > > > >> +--> No updates this week. > > > >> +--> More: > > > >> https://etherpad.openstack.org/p/tripleo-networking-squad-status > > > >> > > > >> +--------------+ > > > >> | Workflows | > > > >> +--------------+ > > > >> > > > >> +--> No updates this week. > > > >> +--> More: https://etherpad.openstack.org/p/tripleo-workflows-squ > > > ad-status > > > >> > > > >> +-----------+ > > > >> | Security | > > > >> +-----------+ > > > >> > > > >> +--> Working on Secrets management and Limit TripleO users > > > efforts > > > >> +--> More: https://etherpad.openstack.org/p/tripleo-security-squa > > > d > > > >> > > > >> +------------+ > > > >> | Owl fact | > > > >> +------------+ > > > >> Elf owls live in a cacti. They are the smallest owls, and live in > > > the > > > >> southwestern United States and Mexico. It will sometimes make its > > > home in > > > >> the giant saguaro cactus, nesting in holes made by other animals. > > > However, > > > >> the elf owl isn’t picky and will also live in trees or on > > > telephone poles. > > > >> > > > >> Source: > > > >> http://mentalfloss.com/article/68473/15-mysterious-facts-about-ow > > > ls > > > >> > > > >> Thank you all for reading and stay tuned! > > > >> -- > > > >> Your fellow reporter, Emilien Macchi > > > >> > > > >> > > > ____________________________________________________________________ > > > ______ > > > >> OpenStack Development Mailing List (not for usage questions) > > > >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:un > > > subscribe > > > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > >> > > > > > > > > > > > ____________________________________________________________________ > > > ______ > > > > OpenStack Development Mailing List (not for usage questions) > > > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:uns > > > ubscribe > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > > ____________________________________________________________________ > > > ______ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsub > > > scribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > -- > > Cheers, > > ~ Prad > > ______________________________________________________________________ > > ____ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubsc > > ribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Cheers, ~ Prad -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sshnaidm at redhat.com Tue Jul 31 11:40:28 2018 From: sshnaidm at redhat.com (Sagi Shnaidman) Date: Tue, 31 Jul 2018 14:40:28 +0300 Subject: [openstack-dev] [tripleo][ci][metrics] Stucked in the middle of work because of RDO CI In-Reply-To: References: Message-ID: Hi, Martin I see master OVB jobs are passing now [1], please recheck. [1] http://cistatus.tripleo.org/ On Tue, Jul 31, 2018 at 12:24 PM, Martin Magr wrote: > Greetings guys, > > it is pretty obvious that RDO CI jobs in TripleO projects are broken > [0]. Once Zuul CI jobs will pass would it be possible to have AMQP/collectd > patches ([1],[2],[3]) merged please even though the negative result of RDO > CI jobs? Half of the patches for this feature is merged and the other half > is stucked in this situation, were nobody reviews these patches, because > there is red -1. Those patches passed Zuul jobs several times already and > were manually tested too. > > Thanks in advance for consideration of this situation, > Martin > > [0] https://trello.com/c/hkvfxAdX/667-cixtripleoci-rdo-software- > factory-3rd-party-jobs-failing-due-to-instance-nodefailure > [1] https://review.openstack.org/#/c/578749 > [2] https://review.openstack.org/#/c/576057/ > [3] https://review.openstack.org/#/c/572312/ > > -- > Martin Mágr > Senior Software Engineer > Red Hat Czech > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Best regards Sagi Shnaidman -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Tue Jul 31 13:51:10 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Tue, 31 Jul 2018 09:51:10 -0400 Subject: [openstack-dev] [ironic] [FFE] Teach ironic about ppc64le boot requirements In-Reply-To: References: Message-ID: Given that the ironic-lib version in question is already in upper-constraints, I think it may be fine. Realistically we do want people to be running the latest version of ironic-lib when deploying anyway. That being said, I'm +1 for this, however we need a second ironic-core to be willing to review this over the next few days. On Mon, Jul 30, 2018 at 1:55 PM, Michael Turek wrote: > I would like to request a FFE for this RFE > https://storyboard.openstack.org/#!/story/1749057 > > The implementation should be complete and is currently passing CI, but does > need more reviews. I'd also like to test this locally ideally. > > pros > --- > - Improves ppc64le support > > cons > --- > - Bumps ironic-lib version for both IPA and Ironic > > risk > --- > - There are other deployment methods for ppc64le, including wholedisk and > netboot. However, this feature is desired to improve parity between x86 and > ppc64le for tripleo. The feature should not affect any current working > deployment methods, but please review closely. > > Please let me know if you'd like more detail on this or have any questions! > Thanks! 
> > -Mike Turek > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From dmellado at redhat.com Tue Jul 31 14:14:12 2018 From: dmellado at redhat.com (Daniel Mellado) Date: Tue, 31 Jul 2018 16:14:12 +0200 Subject: [openstack-dev] [kuryr] SRIOV and Multi-Vif meeting Message-ID: <87d5cd9c-8182-80ed-de5c-df08c99e9b59@redhat.com> Hi everyone, As discussed in last meeting, we'll get to use next week's one to go over the multi-vif blueprint [1] and some discussions about its implementation and remaining patches. Feel free to join us at [2] Best! Daniel [1] https://blueprints.launchpad.net/kuryr-kubernetes/+spec/multi-vif-pods [2] https://bluejeans.com/4944951842 -------------- next part -------------- A non-text attachment was scrubbed... Name: 0xC905561547B09777.asc Type: application/pgp-keys Size: 9561 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From jim at jimrollenhagen.com Tue Jul 31 14:22:17 2018 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Tue, 31 Jul 2018 10:22:17 -0400 Subject: [openstack-dev] [ironic] [FFE] Teach ironic about ppc64le boot requirements In-Reply-To: References: Message-ID: On Tue, Jul 31, 2018 at 9:51 AM, Julia Kreger wrote: > Given that the ironic-lib version in question is already in > upper-constraints, I think it may be fine. Realistically we do want > people to be running the latest version of ironic-lib when deploying > anyway. That being said, I'm +1 for this, however we need a second > ironic-core to be willing to review this over the next few days. > Happy to help, I'm +2 on the IPA patch. Ironic patch just needs some unit tests. // jim > On Mon, Jul 30, 2018 at 1:55 PM, Michael Turek > wrote: > > I would like to request a FFE for this RFE > > https://storyboard.openstack.org/#!/story/1749057 > > > > The implementation should be complete and is currently passing CI, but > does > > need more reviews. I'd also like to test this locally ideally. > > > > pros > > --- > > - Improves ppc64le support > > > > cons > > --- > > - Bumps ironic-lib version for both IPA and Ironic > > > > risk > > --- > > - There are other deployment methods for ppc64le, including wholedisk and > > netboot. However, this feature is desired to improve parity between x86 > and > > ppc64le for tripleo. The feature should not affect any current working > > deployment methods, but please review closely. > > > > Please let me know if you'd like more detail on this or have any > questions! > > Thanks! > > > > -Mike Turek > > > > > > ____________________________________________________________ > ______________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From thierry at openstack.org Tue Jul 31 14:28:57 2018 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 31 Jul 2018 16:28:57 +0200 Subject: [openstack-dev] [ptg] Post-lunch presentations in Denver Message-ID: <5d7d84a5-6612-f2b6-04c4-039bd2fb4a08@openstack.org> Hi everyone, At the last PTG in Dublin we introduced the concept of post-lunch presentations -- a 30-min segment during the second half of the lunch break during which we communicate and do Q&A on topics that are generally interesting to a crowd of contributors. In Dublin, you may remember we did a "Welcome to the PTG" session on Monday, a Zuul v3 session on Tuesday and an OpenStackSDK session on Wednesday. Due to the snow storm, we had to cancel the release management presentation on Thursday and the lightning talks scheduled for Friday. We do not *have to* fill every available slot -- but if we find content that is generally useful and can be consumed while people start their digestion process, then we can use one of those slots for that. Interesting topics include development tricks, code review etiquette, new libraries features you should adopt, upgrade horror stories... The content should generally fit within 20 min to leave room for Q&A. If you have ideas, please fill: https://etherpad.openstack.org/p/PTG4-postlunch In a few weeks the TC will review suggestions there and pick things that fit the bill. Cheers, -- Thierry Carrez (ttx) From smonderer at vasonanetworks.com Tue Jul 31 15:15:55 2018 From: smonderer at vasonanetworks.com (Samuel Monderer) Date: Tue, 31 Jul 2018 18:15:55 +0300 Subject: [openstack-dev] [tripleo] overcloud deployment fails with during keystone configuration Message-ID: Hi, My overcloud deployment fails with the following error 2018-07-31 14:20:23Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step3]: CREATE_FAILED Resource CREATE failed: Error: resources[0]: Deployment to server failed: deploy_status_code : Deployment exited with non-zero status code: 2 2018-07-31 14:20:24Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step3]: CREATE_FAILED Error: resources.ControllerDeployment_Step3.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 2 2018-07-31 14:20:24Z [overcloud.AllNodesDeploySteps]: CREATE_FAILED Resource CREATE failed: Error: resources.ControllerDeployment_Step3.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 2 2018-07-31 14:20:25Z [overcloud.AllNodesDeploySteps]: CREATE_FAILED Error: resources.AllNodesDeploySteps.resources.ControllerDeployment_Step3.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 2 2018-07-31 14:20:25Z [overcloud]: CREATE_FAILED Resource CREATE failed: Error: resources.AllNodesDeploySteps.resources.ControllerDeployment_Step3.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 2 Stack overcloud CREATE_FAILED overcloud.AllNodesDeploySteps.ControllerDeployment_Step3.0: resource_type: OS::Heat::StructuredDeployment physical_resource_id: 69fd1d02-7e20-4d91-a7b4-552cdf4e42f2 status: CREATE_FAILED status_reason: | Error: resources[0]: Deployment to server failed: deploy_status_code : Deployment exited with non-zero status code: 2 deploy_stdout: | ... 
"+ exit 1", "2018-07-31 17:20:19,292 INFO: 74435 -- Finished processing puppet configs for keystone_init_tasks", "2018-07-31 17:20:19,293 ERROR: 74434 -- ERROR configuring keystone_init_tasks" ] } to retry, use: --limit @/var/lib/heat-config/heat-config-ansible/2fa9a52f-7e15-43fc-b67e-1ae358468790_playbook.retry PLAY RECAP ********************************************************************* localhost : ok=9 changed=2 unreachable=0 failed=1 (truncated, view all with --long) deploy_stderr: | Not cleaning temporary directory /tmp/tripleoclient-D67O5V Not cleaning temporary directory /tmp/tripleoclient-D67O5V Heat Stack create failed. Heat Stack create failed. (undercloud) [stack at staging-director ~]$ In the director keystone log I get the following 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi [req-22ee40c6-6daa-428d-aa39-06a96a4d5d3d - - - - -] (pymysql.err.ProgrammingError) (1146, u"Table 'keystone.project' doesn't exist") [SQL: u'SELECT project.id AS project_id, project.name AS project_name, project.domain_id AS project_domain_id, project.description AS project_description, project.enabled AS project_enabled, project.extra AS project_extra, project.parent_ id AS project_parent_id, project.is_domain AS project_is_domain \nFROM project \nWHERE project.is_domain = true'] (Background on this error at: http://sqlalche.me/e/f405): ProgrammingError: (pymysql.err.Programmin gError) (1146, u"Table 'keystone.project' doesn't exist") [SQL: u'SELECT project.id AS project_id, project.name AS project_name, project.domain_id AS project_domain_id, project.description AS project_description, project.enabled AS project_enabled, project.extra AS project_extra, project.parent_id AS project_parent_id, project.is_domain AS project_is_domain \nFROM project \nWHERE project.is_domain = true'] (Background on t his error at: http://sqlalche.me/e/f405) 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi Traceback (most recent call last): 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/keystone/common/wsgi.py", line 226, in __call__ 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi result = method(req, **params) 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/keystone/common/controller.py", line 126, in wrapper 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi return f(self, request, filters, **kwargs) 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/keystone/resource/controllers.py", line 54, in list_domains 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi refs = PROVIDERS.resource_api.list_domains(hints=hints) 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/keystone/common/manager.py", line 116, in wrapped 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi __ret_val = __f(*args, **kwargs) 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/keystone/common/manager.py", line 68, in wrapper 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi return f(self, *args, **kwargs) 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/keystone/resource/core.py", line 735, in list_domains 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi projects = self.list_projects_acting_as_domain(hints) 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/keystone/common/manager.py", line 116, in wrapped 
2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi __ret_val = __f(*args, **kwargs) 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/keystone/resource/core.py", line 854, in list_projects_acting_as_domain 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi hints or driver_hints.Hints()) 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/keystone/resource/backends/sql.py", line 121, in list_projects_acting_as_domain 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi return self.list_projects(hints) 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/keystone/common/driver_hints.py", line 42, in wrapper 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi return f(self, hints, *args, **kwargs) 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/keystone/resource/backends/sql.py", line 85, in list_projects 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi return [project_ref.to_dict() for project_ref in project_refs 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line 2878, in __iter__ 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi return self._execute_and_instances(context) 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line 2901, in _execute_and_instances 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi result = conn.execute(querycontext.statement, self._params) 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 948, in execute 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi return meth(self, multiparams, params) 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi File "/usr/lib64/python2.7/site-packages/sqlalchemy/sql/elements.py", line 269, in _execute_on_connection 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi return connection._execute_clauseelement(self, multiparams, params) 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1060, in _execute_clauseelement 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi compiled_sql, distilled_params 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1200, in _execute_context 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi context) 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1409, in _handle_dbapi_exception 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi util.raise_from_cause(newraise, exc_info) 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py", line 203, in raise_from_cause 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi reraise(type(exception), exception, tb=exc_tb, cause=cause) 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1193, in _execute_context 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi context) 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", line 507, in do_execute 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi 
cursor.execute(statement, parameters) 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/pymysql/cursors.py", line 166, in execute 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi result = self._query(query) 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/pymysql/cursors.py", line 322, in _query 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi conn.query(q) 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 856, in query 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi self._affected_rows = self._read_query_result(unbuffered=unbuffered) 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 1057, in _read_query_result 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi result.read() 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 1340, in read 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi first_packet = self.connection._read_packet() 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 1014, in _read_packet 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi packet.check_error() 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 393, in check_error 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi err.raise_mysql_exception(self._data) 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi File "/usr/lib/python2.7/site-packages/pymysql/err.py", line 107, in raise_mysql_exception 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi raise errorclass(errno, errval) 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi ProgrammingError: (pymysql.err.ProgrammingError) (1146, u"Table 'keystone.project' doesn't exist") [SQL: u'SELECT project.id AS project_id, project.name AS project_name, project.domain_id AS project_domain_id, project.description AS project_description, project.enabled AS project_enabled, project.extra AS project_extra, project.parent_id AS project_parent_id, project.is_domain AS project_is_domain \nFROM project \nWHERE project.is_domain = true'] (Background on this error at: http://sqlalche.me/e/f405) 2018-07-31 17:17:25.592 32 ERROR keystone.common.wsgi -------------- next part -------------- An HTML attachment was scrubbed... URL: From jlibosva at redhat.com Tue Jul 31 15:30:02 2018 From: jlibosva at redhat.com (Jakub Libosvar) Date: Tue, 31 Jul 2018 17:30:02 +0200 Subject: [openstack-dev] [FFE][requirements] Bump ansible-runner u-c to 1.0.5 Message-ID: <7c8ac9a6-c99f-37fc-2288-c5346efd87d3@redhat.com> Hi all, I want to ask for FFE at this time to bump upper-constraint version of ansible-runner library from 1.0.4 to 1.0.5. Reason: ansible-runner 1.0.4 has an issue when running with currently used eventlet version because of missing select.poll() in eventlet [1]. The fix [2] is present in 1.0.5 ansible-runner version. Impact: networking-ansible project uses Neutron project and ansible-runner together and Neutron monkey patches code with eventlet. This fails all operations at networking-ansible. Statement: networking-ansible is the only project using ansible-runner in OpenStack world [3] so if we release Rocky with 1.0.4, the only project using it becomes useless. 
Bumping the version at this late stage will not affect any project besides networking-ansible.

Thanks for your consideration.
Jakub

[1] https://github.com/ansible/ansible-runner/issues/90
[2] https://github.com/ansible/ansible-runner/commit/5608e786eb96408658604e75ef3db3c9a6b39308
[3] http://codesearch.openstack.org/?q=ansible-runner&i=nope&files=&repos=

From myoung at redhat.com Tue Jul 31 15:29:38 2018
From: myoung at redhat.com (Matt Young)
Date: Tue, 31 Jul 2018 11:29:38 -0400
Subject: [openstack-dev] [tripleo] TripleO CI squad status: Sprint 16
Message-ID: 

Greetings,

The TripleO CI squad has recently completed Sprint 16 (5-July - 25-July). The sprint was focused on the migration to Zuul v3. For a list of the completed items for the sprint please refer to the Epic card [1] and the task cards [2].

The Ruck & Rover roles this sprint were filled by Chandan Kumar and Sagi Shnaidman. Thanks to them for their efforts! Detailed notes concerning bugs filed and issues worked on are available in the etherpad [3].

Thanks,

Matt

[1] https://trello.com/c/vyWXcKOB/143-ci-squad-sprint-16-goals
[2] https://trello.com/b/BjcIIp0f/tripleo-and-rdo-ci-archive?menu=filter&filter=label:SPRINT%2016%20CI
[3] https://review.rdoproject.org/etherpad/p/ruckrover-sprint16

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From myoung at redhat.com Tue Jul 31 15:31:38 2018
From: myoung at redhat.com (Matt Young)
Date: Tue, 31 Jul 2018 11:31:38 -0400
Subject: [openstack-dev] [tripleo] TripleO Tempest squad status: Sprint 16
Message-ID: 

Greetings,

The TripleO Tempest squad has recently completed Sprint 16 (5-July - 25-July). This sprint was focused on tasks related to python-tempestconf and integration with the refstack client. Some of this work will continue in Sprint 17. For a list of the completed items for the sprint please refer to the Epic card [1] and the task cards [2].

Thanks,

Matt

[1] https://trello.com/c/1v1dYRnP/144-sprint-16-closing-python-tempestconf-items-out
[2] https://trello.com/b/BjcIIp0f/tripleo-and-rdo-ci-archive?menu=filter&filter=label:Sprint%2016%20Tempest

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ekuvaja at redhat.com Tue Jul 31 15:43:00 2018
From: ekuvaja at redhat.com (Erno Kuvaja)
Date: Tue, 31 Jul 2018 16:43:00 +0100
Subject: [openstack-dev] [requirements][ffe] glance_store 0.26.1 (to be released)
Message-ID: 

Hi all,

We found a critical bug in glance_store release 0.26.0 (the final release for Rocky) that prevents us from consuming the multihash work for the Glance Rocky release.

We would like to include https://review.openstack.org/#/c/587098 containing the missing wrappers needed for the feature to work, and to have the requirement bumped to 0.26.1 (once tagged) for the Rocky release. The change is well isolated and self-contained, not affecting behavior apart from enabling consumption of the multihash feature.
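For reference, the requested bump amounts to roughly the following in openstack/requirements (illustrative lines only, assuming the 0.26.1 tag gets published):

    # upper-constraints.txt
    glance-store===0.26.1

    # global-requirements.txt / downstream consumers, only if they need the fix
    glance-store>=0.26.1 # Apache-2.0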
Best, Erno jokke Kuvaja From aschultz at redhat.com Tue Jul 31 16:03:55 2018 From: aschultz at redhat.com (Alex Schultz) Date: Tue, 31 Jul 2018 10:03:55 -0600 Subject: [openstack-dev] [tripleo] deployement fails In-Reply-To: References: Message-ID: On Mon, Jul 30, 2018 at 8:48 AM, Samuel Monderer wrote: > Hi, > > I'm trying to deploy a small environment with one controller and one compute > but i get a timeout with no specific information in the logs > > 2018-07-30 13:19:41Z [overcloud.Controller.0.ControllerConfig]: > CREATE_IN_PROGRESS state changed > 2018-07-30 13:19:41Z [overcloud.Controller.0.ControllerConfig]: > CREATE_COMPLETE state changed > 2018-07-30 14:04:51Z [overcloud.ComputeGammaV3]: CREATE_FAILED CREATE > aborted (Task create from ResourceGroup "ComputeGammaV3" Stack "overcloud" > [690ee33c-8194-4713-a44f-9c8dcf88359f] Timed out) > 2018-07-30 14:04:51Z [overcloud.ComputeGammaV3]: UPDATE_FAILED Stack UPDATE > cancelled > 2018-07-30 14:04:51Z [overcloud]: CREATE_FAILED Timed out > 2018-07-30 14:04:51Z [overcloud.ComputeGammaV3.0]: CREATE_FAILED Stack > CREATE cancelled > 2018-07-30 14:04:51Z [overcloud.Controller]: CREATE_FAILED CREATE aborted > (Task create from ResourceGroup "Controller" Stack "overcloud" > [690ee33c-8194-4713-a44f-9c8dcf88359f] Timed out) > 2018-07-30 14:04:51Z [overcloud]: CREATE_FAILED Timed out > 2018-07-30 14:04:51Z [overcloud.Controller]: UPDATE_FAILED Stack UPDATE > cancelled > 2018-07-30 14:04:51Z [overcloud.Controller.0]: CREATE_FAILED Stack CREATE > cancelled > 2018-07-30 14:04:52Z [overcloud.ComputeGammaV3.0]: CREATE_FAILED > resources[0]: Stack CREATE cancelled > > Stack overcloud CREATE_FAILED > > overcloud.ComputeGammaV3.0: > resource_type: OS::TripleO::ComputeGammaV3 > physical_resource_id: 5755d746-7cbf-4f3d-a9e1-d94a713705a7 > status: CREATE_FAILED > status_reason: | > resources[0]: Stack CREATE cancelled > overcloud.Controller.0: > resource_type: OS::TripleO::Controller > physical_resource_id: 4bcf84c1-1d54-45ee-9f81-b6dda780cbd7 > status: CREATE_FAILED > status_reason: | > resources[0]: Stack CREATE cancelled > Not cleaning temporary directory /tmp/tripleoclient-vxGzKo > Not cleaning temporary directory /tmp/tripleoclient-vxGzKo > Heat Stack create failed. > Heat Stack create failed. > (undercloud) [stack at staging-director ~]$ > So this is a timeout likely caused by a bad network configuration so no response makes it back to Heat during the deployment. Heat never gets a response back so it just times out. You'll need to check your host network configuration and trouble shoot that. 
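For example, something along these lines usually narrows it down (the stack name, node address and SSH user are assumed here, adjust to your environment):

    # on the undercloud
    source ~/stackrc
    openstack stack failures list overcloud --long
    openstack server list

    # on the node that never signalled back (heat-admin is the default TripleO user)
    ssh heat-admin at 192.168.24.10
    sudo journalctl -u os-collect-config | tail -n 50
    sudo ovs-vsctl show
    ip addr show br-ex

If os-collect-config cannot reach the undercloud Heat/metadata endpoints, or br-ex never comes up with the expected addresses, the software deployments stay IN_PROGRESS until Heat hits the timeout exactly as you are seeing.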
Thanks, -Alex > It seems that it wasn't able to configure the OVS bridges > > (undercloud) [stack at staging-director ~]$ openstack software deployment show > 4b4fc54f-7912-40e2-8ad4-79f6179fe701 > +---------------+--------------------------------------------------------+ > | Field | Value | > +---------------+--------------------------------------------------------+ > | id | 4b4fc54f-7912-40e2-8ad4-79f6179fe701 | > | server_id | 0accb7a3-4869-4497-8f3b-5a3d99f3926b | > | config_id | 2641b4dd-afc7-4bf5-a2e2-481c207e4b7f | > | creation_time | 2018-07-30T13:19:44Z | > | updated_time | | > | status | IN_PROGRESS | > | status_reason | Deploy data available | > | input_values | {u'interface_name': u'nic1', u'bridge_name': u'br-ex'} | > | action | CREATE | > +---------------+--------------------------------------------------------+ > (undercloud) [stack at staging-director ~]$ openstack software deployment show > a297e8ae-f4c9-41b0-938f-c51f9fe23843 > +---------------+--------------------------------------------------------+ > | Field | Value | > +---------------+--------------------------------------------------------+ > | id | a297e8ae-f4c9-41b0-938f-c51f9fe23843 | > | server_id | 145167da-9b96-4eee-bfe9-399b854c1e84 | > | config_id | d1baf0a5-de9b-48f2-b486-9f5d97f7e94f | > | creation_time | 2018-07-30T13:17:29Z | > | updated_time | | > | status | IN_PROGRESS | > | status_reason | Deploy data available | > | input_values | {u'interface_name': u'nic1', u'bridge_name': u'br-ex'} | > | action | CREATE | > +---------------+--------------------------------------------------------+ > (undercloud) [stack at staging-director ~]$ > > Regards, > Samuel > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From myoung at redhat.com Tue Jul 31 16:07:34 2018 From: myoung at redhat.com (Matt Young) Date: Tue, 31 Jul 2018 12:07:34 -0400 Subject: [openstack-dev] [tripleo] TripleO CI+Tempest Squad Planning Summary: Sprint 17 Message-ID: >From the Halls of CI we greet thee! HAIL! # Overview The CI and Tempest squads have recently completed the planning phase of Sprint 17. The Sprint runs from 26-July thru 15-Aug. The sprint for both squads is now in the initial design phase (first week) of the sprint. The epic card and tasks for CI [1][2] and Tempest [3][4] squads are linked below. The Ruck and Rover for this sprint are Gabriele Cerami (panda) and Rafael Folco (rfolco). Their notes and current status is tracked in the etherpad for sprint 17 [5]. Please direct questions or queries to them regarding CI status or issues in #tripleo, ideally to whomever has the ‘|ruck’ suffix on their nick. # CI Squad * The main topic is focused on continuing the migration to zuul v3, including migrating from legacy bash to ansible tasks/playbooks * Between planned PTO and training, the team is running at reduced capacity this sprint. # Tempest Squad * clearing out technical debt related to python-tempestconf and refstack-client integration. * developing materials for a presentation on implementing a tempest plugin. * splitting out the validate-tempest role [6] to a discrete repository. * Due to planned PTO the squad is quite resource constrained this sprint. 
More detail on our team and process can be found in the spec [7] Thanks, Matt [1] https://trello.com/c/JikmHXSS/881-sprint-17-goals [2] https://trello.com/b/U1ITy0cu/tripleo-and-rdo-ci?menu=filter&filter=label:Sprint%2017%20CI [3] https://trello.com/c/yAnDETzJ/878-sprint-17-tempest-clear-technical-debts [4] https://trello.com/b/U1ITy0cu/tripleo-and-rdo-ci?menu=filter&filter=label:Sprint%2017%20Tempest [5] https://review.rdoproject.org/etherpad/p/ruckrover-sprint17 [6] https://github.com/openstack/tripleo-quickstart-extras/tree/master/roles/validate-tempest [7] https://specs.openstack.org/openstack/tripleo-specs/specs/policy/ci-team-structure.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Tue Jul 31 16:30:24 2018 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 31 Jul 2018 18:30:24 +0200 Subject: [openstack-dev] [ptg] Self-healing SIG meeting moved to Thursday morning Message-ID: <3ee9cf15-4587-7884-f8fd-b00ec22549fc@openstack.org> Hi! Quick heads-up: Following a request[1] from Adam Spiers (SIG lead), we modified the PTG schedule to move the Self-Healing SIG meeting from Friday (all day) to Thursday morning (only morning). You can see the resulting schedule at: https://www.openstack.org/ptg#tab_schedule Sorry for any inconvenience this may cause. [1] http://lists.openstack.org/pipermail/openstack-dev/2018-July/132392.html -- Thierry Carrez (ttx) From prometheanfire at gentoo.org Tue Jul 31 16:46:51 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Tue, 31 Jul 2018 11:46:51 -0500 Subject: [openstack-dev] [FFE][requirements] Bump ansible-runner u-c to 1.0.5 In-Reply-To: <7c8ac9a6-c99f-37fc-2288-c5346efd87d3@redhat.com> References: <7c8ac9a6-c99f-37fc-2288-c5346efd87d3@redhat.com> Message-ID: <20180731164651.vme55zyy3l64ibnh@gentoo.org> On 18-07-31 17:30:02, Jakub Libosvar wrote: > Hi all, > > I want to ask for FFE at this time to bump upper-constraint version of > ansible-runner library from 1.0.4 to 1.0.5. > > Reason: ansible-runner 1.0.4 has an issue when running with currently > used eventlet version because of missing select.poll() in eventlet [1]. > The fix [2] is present in 1.0.5 ansible-runner version. > > Impact: networking-ansible project uses Neutron project and > ansible-runner together and Neutron monkey patches code with eventlet. > This fails all operations at networking-ansible. > > Statement: networking-ansible is the only project using ansible-runner > in OpenStack world [3] so if we release Rocky with 1.0.4, the only > project using it becomes useless. Bumping the version at this later > stage will not affect any other project beside networking-ansible. > > [1] https://github.com/ansible/ansible-runner/issues/90 > [2] > https://github.com/ansible/ansible-runner/commit/5608e786eb96408658604e75ef3db3c9a6b39308 > [3] http://codesearch.openstack.org/?q=ansible-runner&i=nope&files=&repos= Looks good, you may want to update the minimum in networking-ansible, but lgtm otherwise. | networking-ansible | requirements.txt | 5 | ansible-runner>=1.0.3 # Apache-2.0 | -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From prometheanfire at gentoo.org Tue Jul 31 16:50:09 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Tue, 31 Jul 2018 11:50:09 -0500 Subject: [openstack-dev] [requirements][ffe] glance_store 0.26.1 (to be released) In-Reply-To: References: Message-ID: <20180731165009.72eseorr2shxjtnn@gentoo.org> On 18-07-31 16:43:00, Erno Kuvaja wrote: > Hi all, > > We found a critical bug on glance_store release 0.26.0 (the final > release for Rocky) preventing us to consume the multihash work for > Glance Rocky release. > > We would like to include https://review.openstack.org/#/c/587098 > containing the missing wrappers for the feature to work. And have > requirement bump to 0.26.1 (once tagged) for Rocky release. The change > is well isolated and self contained, not affecting the behavior apart > from the consumption of the multihash feature. > Looks fine, you may consider bumping the min defined in downstream consumers but lgtm otherwise. +----------------------------------------+----------------------------------------------------------+------+-----------------------------------+ | Repository | Filename | Line | Text | +----------------------------------------+----------------------------------------------------------+------+-----------------------------------+ | glance | requirements.txt | 49 | glance-store>=0.22.0 # Apache-2.0 | | glare | requirements.txt | 50 | glance-store>=0.22.0 # Apache-2.0 | | upstream-institute-virtual-environment | elements/upstream-training/static/tmp/requirements.txt | 64 | glance-store==0.22.0 | +----------------------------------------+----------------------------------------------------------+------+-----------------------------------+ -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From emilien at redhat.com Tue Jul 31 16:50:56 2018 From: emilien at redhat.com (Emilien Macchi) Date: Tue, 31 Jul 2018 12:50:56 -0400 Subject: [openstack-dev] [tripleo] The Weekly Owl - 27th Edition Message-ID: Welcome to the twenty-seventh edition of a weekly update in TripleO world! The goal is to provide a short reading (less than 5 minutes) to learn what's new this week. Any contributions and feedback are welcome. Link to the previous version: http://lists.openstack.org/pipermail/openstack-dev/2018-July/132448.html +---------------------------------+ | General announcements | +---------------------------------+ +--> We're preparing the first release candidate of TripleO Rocky, please focus on Critical / High bugs. +--> Reminder about PTG etherpad, feel free to propose topics: https://etherpad.openstack.org/p/tripleo-ptg-stein +------------------------------+ | Continuous Integration | +------------------------------+ +--> Sprint theme: migration to zuul v3, including migrating from legacy bash to ansible tasks/playbooks (More on https://trello.com/c/JikmHXSS/881-sprint-17-goals) +--> The Ruck and Rover for this sprint are Gabriele Cerami (panda) and Rafael Folco (rfolco). Please tell them any CI issue. +--> Promotion on master is 11 days, 0 day on Queens, 3 days on Pike and 4 days on Ocata. 
+--> More: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting +-------------+ | Upgrades | +-------------+ +--> Need review on work for updates/upgrades with external installers: https://review.openstack.org/#/q/status:open+branch:master+topic:external-update-upgrade +--> More: https://etherpad.openstack.org/p/tripleo-upgrade-squad-status +---------------+ | Containers | +---------------+ +--> No major update this week, in bug fixing mode. +--> More: https://etherpad.openstack.org/p/tripleo-containers-squad-status +----------------------+ | config-download | +----------------------+ +--> No updates this week.. +--> More: https://etherpad.openstack.org/p/tripleo-config-download-squad-status +--------------+ | Integration | +--------------+ +--> No updates this week. +--> More: https://etherpad.openstack.org/p/tripleo-integration-squad-status +---------+ | UI/CLI | +---------+ +--> config-download support work is landed! +--> More: https://etherpad.openstack.org/p/tripleo-ui-cli-squad-status +---------------+ | Validations | +---------------+ +--> Need review on custom validations support. +--> Efforts around Mistral workflow lookup plugin +--> More: https://etherpad.openstack.org/p/tripleo-validations-squad-status +---------------+ | Networking | +---------------+ +--> Policy based routing for os-net-config +--> Patches for Neutron routed networks support using segments for TripleO +--> Ansible ML2 driver: good progress on patches and testing. +--> More: https://etherpad.openstack.org/p/tripleo-networking-squad-status +--------------+ | Workflows | +--------------+ +--> No updates this week. +--> More: https://etherpad.openstack.org/p/tripleo-workflows-squad-status +-----------+ | Security | +-----------+ +--> No updates this week. +--> Last meeting notes: http://eavesdrop.openstack.org/meetings/security_squad/2018/security_squad.2018-07-18-12.07.html +--> More: https://etherpad.openstack.org/p/tripleo-security-squad +------------+ | Owl fact | +------------+ Owls have far-sighted, tubular eyes: instead of spherical eyeballs, owls have "eye tubes" that go far back into their skulls - which means their eyes are fixed in place, so they have to turn their heads to see. The size of their eyes helps them see in the dark, and they're far-sighted, which allows them to spot prey from yards away. Up close, everything is blurry, and they depend on small, hair-like feathers on their beaks and feet to feel their food. Source: http://mentalfloss.com/article/68473/15-mysterious-facts-about-owls Thank you all for reading and stay tuned! -- Your fellow reporter, Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From aspiers at suse.com Tue Jul 31 16:57:56 2018 From: aspiers at suse.com (Adam Spiers) Date: Tue, 31 Jul 2018 17:57:56 +0100 Subject: [openstack-dev] [ptg] Self-healing SIG meeting moved to Thursday morning In-Reply-To: <3ee9cf15-4587-7884-f8fd-b00ec22549fc@openstack.org> References: <3ee9cf15-4587-7884-f8fd-b00ec22549fc@openstack.org> Message-ID: <20180731165755.dqxgittuzao2sdhu@pacific.linksys.moosehall> Thierry Carrez wrote: >Hi! Quick heads-up: > >Following a request[1] from Adam Spiers (SIG lead), we modified the >PTG schedule to move the Self-Healing SIG meeting from Friday (all >day) to Thursday morning (only morning). You can see the resulting >schedule at: > >https://www.openstack.org/ptg#tab_schedule > >Sorry for any inconvenience this may cause. 
It's me who should be apologising - Thierry only deserves thanks for accommodating my request at late notice ;-) From miguel at mlavalle.com Tue Jul 31 17:26:00 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Tue, 31 Jul 2018 12:26:00 -0500 Subject: [openstack-dev] [neutron] Port mirroring for SR-IOV VF to VF mirroring In-Reply-To: <6345119E91D5C843A93D64F498ACFA136999ECF2@SHSMSX101.ccr.corp.intel.com> References: <6345119E91D5C843A93D64F498ACFA136999ECF2@SHSMSX101.ccr.corp.intel.com> Message-ID: Hi Forrest, Yes, in my email, I was precisely referring to the work around https://review.openstack.org/#/c/574477. Now that we are wrapping up Rocky, I wanted to raise the visibility of this spec. I am glad you noticed. This week we are going to cut our RC-1 and I don't anticipate that we will will have a RC-2 for Rocky. So starting next week, let's go back to the spec and refine it, so we can start implementing in Stein as soon as possible. Depending on how much progress we make in the spec, we may need to schedule a discussion during the PTG in Denver, September 10 - 14, in case face to face time is needed to reach an agreement. I know that Manjeet is going to attend the PTG and he has already talked to me about this spec in the recent past. So maybe Manjeet could be the conduit to represent this spec in Denver, in case we need to talk about it there Best regards Miguel On Tue, Jul 31, 2018 at 4:12 AM, Zhao, Forrest wrote: > Hi Miguel, > > > > In your mail “PTL candidacy for the Stein cycle”, it mentioned that “port > mirroring for SR-IOV VF to VF mirroring” is within Stein goal. > > > > Could you tell where is the place to discuss the design for this feature? > Mailing list, IRC channel, weekly meeting or others? > > > > I was involved in its spec review at https://review.openstack.org/# > /c/574477/; but it has not been updated for a while. > > > > Thanks, > > Forrest > -------------- next part -------------- An HTML attachment was scrubbed... URL: From prad at redhat.com Tue Jul 31 17:31:17 2018 From: prad at redhat.com (Pradeep Kilambi) Date: Tue, 31 Jul 2018 13:31:17 -0400 Subject: [openstack-dev] [tripleo][ci][metrics] FFE request for QDR integration in TripleO (Was: Stucked in the middle of work because of RDO CI) In-Reply-To: References: Message-ID: Hi Alex: Can you consider this our FFE for the QDR patches. Its mainly blocked on CI issues. Half the patches for QDR integration are already merged. The other 3 referenced need to get merged once CI passes. Please consider this out formal request for FFE for QDR integration in tripleo. Cheers, ~ Prad On Tue, Jul 31, 2018 at 7:40 AM Sagi Shnaidman wrote: > Hi, Martin > > I see master OVB jobs are passing now [1], please recheck. > > [1] http://cistatus.tripleo.org/ > > On Tue, Jul 31, 2018 at 12:24 PM, Martin Magr wrote: > >> Greetings guys, >> >> it is pretty obvious that RDO CI jobs in TripleO projects are broken >> [0]. Once Zuul CI jobs will pass would it be possible to have AMQP/collectd >> patches ([1],[2],[3]) merged please even though the negative result of RDO >> CI jobs? Half of the patches for this feature is merged and the other half >> is stucked in this situation, were nobody reviews these patches, because >> there is red -1. Those patches passed Zuul jobs several times already and >> were manually tested too. 
>> >> Thanks in advance for consideration of this situation, >> Martin >> >> [0] >> https://trello.com/c/hkvfxAdX/667-cixtripleoci-rdo-software-factory-3rd-party-jobs-failing-due-to-instance-nodefailure >> [1] https://review.openstack.org/#/c/578749 >> [2] https://review.openstack.org/#/c/576057/ >> [3] https://review.openstack.org/#/c/572312/ >> >> -- >> Martin Mágr >> Senior Software Engineer >> Red Hat Czech >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > > -- > Best regards > Sagi Shnaidman > -- Cheers, ~ Prad -------------- next part -------------- An HTML attachment was scrubbed... URL: From gr at ham.ie Tue Jul 31 17:39:36 2018 From: gr at ham.ie (Graham Hayes) Date: Tue, 31 Jul 2018 18:39:36 +0100 Subject: [openstack-dev] [designate][stable] Stable Core Team Updates Message-ID: <1fd50f5e-9aa4-8c6d-729f-eecac4d7d5e6@ham.ie> Hi Stable Team, I would like to nominate 2 new stable core reviewers for Designate. * Erik Olof Gunnar Andersson * Jens Harbott (frickler) Erik has been doing a lot of stable reviews recently, and Jens has shown that he understands the policy in other reviews (and has stable rights on other repositories (like DevStack) already). Thanks, Graham Hayes 1 - https://review.openstack.org/#/q/(project:openstack/designate+OR+project:openstack/python-designateclient+OR+project:openstack/designate-specs+OR+project:openstack/designate-dashboard+OR+project:openstack/designate-tempest-plugin)+branch:%255Estable/.*+reviewedby:%22Erik+Olof+Gunnar+Andersson+%253Ceandersson%2540blizzard.com%253E%22 2 - https://review.openstack.org/#/q/(project:openstack/designate+OR+project:openstack/python-designateclient+OR+project:openstack/designate-specs+OR+project:openstack/designate-dashboard+OR+project:openstack/designate-tempest-plugin)+branch:%255Estable/.*+reviewedby:%22Jens+Harbott+(frickler)+%253Cj.harbott%2540x-ion.de%253E%22 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: OpenPGP digital signature URL: From openstack at fried.cc Tue Jul 31 17:42:10 2018 From: openstack at fried.cc (Eric Fried) Date: Tue, 31 Jul 2018 12:42:10 -0500 Subject: [openstack-dev] [Nova] [Cyborg] Updates to os-acc proposal In-Reply-To: <49db1d12-1fd3-93d1-a31e-8a2a5a35654d@intel.com> References: <49db1d12-1fd3-93d1-a31e-8a2a5a35654d@intel.com> Message-ID: Sundar- > * Cyborg drivers deal with device-specific aspects, including > discovery/enumeration of devices and handling the Device Half of the > attach (preparing devices/accelerators for attach to an instance, > post-attach cleanup (if any) after successful attach, releasing > device/accelerator resources on instance termination or failed > attach, etc.) > * os-acc plugins deal with hypervisor/system/architecture-specific > aspects, including handling the Instance Half of the attach (e.g. > for libvirt with PCI, preparing the XML snippet to be included in > the domain XML). This sounds well and good, but discovery/enumeration will also be hypervisor/system/architecture-specific. So... > Thus, the drivers and plugins are expected to be complementary. For > example, for 2 devices of types T1 and T2, there shall be 2 separate > Cyborg drivers. 
Further, we would have separate plugins for, say, > x86+KVM systems and Power systems. We could then have four different > deployments -- T1 on x86+KVM, T2 on x86+KVM, T1 on Power, T2 on Power -- > by suitable combinations of the drivers and plugins. ...the discovery/enumeration code for T1 on x86+KVM (lsdev? lspci? walking the /dev file system?) will be totally different from the discovery/enumeration code for T1 on Power (pypowervm.wrappers.ManagedSystem.get(adapter)). I don't mind saying "drivers do the device side; plugins do the instance side" but I don't see getting around the fact that both "sides" will need to have platform-specific code. > One secondary detail to note is that Nova compute calls os-acc per > instance for all accelerators for that instance, not once for each > accelerator. You mean for getVAN()? Because AFAIK, os_vif.plug(list_of_vif_objects, InstanceInfo) is *not* how nova uses os-vif for plugging. Thanks, Eric . From jlibosva at redhat.com Tue Jul 31 17:53:57 2018 From: jlibosva at redhat.com (Jakub Libosvar) Date: Tue, 31 Jul 2018 19:53:57 +0200 Subject: [openstack-dev] [FFE][requirements] Bump ansible-runner u-c to 1.0.5 In-Reply-To: <20180731164651.vme55zyy3l64ibnh@gentoo.org> References: <7c8ac9a6-c99f-37fc-2288-c5346efd87d3@redhat.com> <20180731164651.vme55zyy3l64ibnh@gentoo.org> Message-ID: <546818f2-2aa8-3192-16a2-c775d24e646d@redhat.com> On 31/07/2018 18:46, Matthew Thode wrote: > On 18-07-31 17:30:02, Jakub Libosvar wrote: >> Hi all, >> >> I want to ask for FFE at this time to bump upper-constraint version of >> ansible-runner library from 1.0.4 to 1.0.5. >> >> Reason: ansible-runner 1.0.4 has an issue when running with currently >> used eventlet version because of missing select.poll() in eventlet [1]. >> The fix [2] is present in 1.0.5 ansible-runner version. >> >> Impact: networking-ansible project uses Neutron project and >> ansible-runner together and Neutron monkey patches code with eventlet. >> This fails all operations at networking-ansible. >> >> Statement: networking-ansible is the only project using ansible-runner >> in OpenStack world [3] so if we release Rocky with 1.0.4, the only >> project using it becomes useless. Bumping the version at this later >> stage will not affect any other project beside networking-ansible. >> >> [1] https://github.com/ansible/ansible-runner/issues/90 >> [2] >> https://github.com/ansible/ansible-runner/commit/5608e786eb96408658604e75ef3db3c9a6b39308 >> [3] http://codesearch.openstack.org/?q=ansible-runner&i=nope&files=&repos= > > Looks good, you may want to update the minimum in networking-ansible, > but lgtm otherwise. > > | networking-ansible | requirements.txt | 5 | ansible-runner>=1.0.3 # Apache-2.0 | Yep, thanks for reminder. I've done that here https://review.openstack.org/#/c/587475/ where the workaround was removed. Jakub > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From williamsalles at gmail.com Tue Jul 31 18:17:14 2018 From: williamsalles at gmail.com (william sales) Date: Tue, 31 Jul 2018 15:17:14 -0300 Subject: [openstack-dev] [Tacker] - TACKER + NETWORKING_SFC + NSH Message-ID: Hello guys, is there any version of Tacker that allows the use of networking_sfc with NSH? Thankful. 
William Sales -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Tue Jul 31 19:15:08 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 31 Jul 2018 14:15:08 -0500 Subject: [openstack-dev] [requirements][ffe] Critical bug found in python-cinderclient Message-ID: <20180731191507.GA4366@sm-workstation> A critical bug has been found in python-cinderclient that is impacting both horizon and python-openstackclient (at least). https://bugs.launchpad.net/cinder/+bug/1784703 tl;dr is, something new was added with a microversion, but support for that was done incorrectly such that nothing less than that new microversion would be allowed. This patch addresses the issue: https://review.openstack.org/587601 Once that lands we will need a new python-cinderclient release to unbreak clients. We may want to blacklist python-cinderclient 4.0.0, but I think at least just raising the upper-constraints should get things working again. Sean From doug at doughellmann.com Tue Jul 31 19:50:42 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 31 Jul 2018 15:50:42 -0400 Subject: [openstack-dev] [requirements][ffe] Critical bug found in python-cinderclient In-Reply-To: <20180731191507.GA4366@sm-workstation> References: <20180731191507.GA4366@sm-workstation> Message-ID: <1533066509-sup-992@lrrr.local> Excerpts from Sean McGinnis's message of 2018-07-31 14:15:08 -0500: > A critical bug has been found in python-cinderclient that is impacting both > horizon and python-openstackclient (at least). > > https://bugs.launchpad.net/cinder/+bug/1784703 > > tl;dr is, something new was added with a microversion, but support for that was > done incorrectly such that nothing less than that new microversion would be > allowed. This patch addresses the issue: > > https://review.openstack.org/587601 > > Once that lands we will need a new python-cinderclient release to unbreak > clients. We may want to blacklist python-cinderclient 4.0.0, but I think at > least just raising the upper-constraints should get things working again. > > Sean > Both adding the exclusion and changing the upper constraint makes sense, since it will ensure that bad version never makes it back into the constraints list. We don't need to sync the exclusion setting into all of the projects that depend on the client, so we won't need a new release of any of the downstream consumers. We could add the exclusion to OSC on master, just for accuracy's sake. Doug From smonderer at vasonanetworks.com Tue Jul 31 19:54:30 2018 From: smonderer at vasonanetworks.com (Samuel Monderer) Date: Tue, 31 Jul 2018 22:54:30 +0300 Subject: [openstack-dev] [tripleo] deployement fails In-Reply-To: References: Message-ID: I used the same host network configuration I used with Ocata (see attached) Do I need to change them if I'm deploying queens?? 
Thanks, Samuel On Tue, Jul 31, 2018 at 7:06 PM Alex Schultz wrote: > On Mon, Jul 30, 2018 at 8:48 AM, Samuel Monderer > wrote: > > Hi, > > > > I'm trying to deploy a small environment with one controller and one > compute > > but i get a timeout with no specific information in the logs > > > > 2018-07-30 13:19:41Z [overcloud.Controller.0.ControllerConfig]: > > CREATE_IN_PROGRESS state changed > > 2018-07-30 13:19:41Z [overcloud.Controller.0.ControllerConfig]: > > CREATE_COMPLETE state changed > > 2018-07-30 14:04:51Z [overcloud.ComputeGammaV3]: CREATE_FAILED CREATE > > aborted (Task create from ResourceGroup "ComputeGammaV3" Stack > "overcloud" > > [690ee33c-8194-4713-a44f-9c8dcf88359f] Timed out) > > 2018-07-30 14:04:51Z [overcloud.ComputeGammaV3]: UPDATE_FAILED Stack > UPDATE > > cancelled > > 2018-07-30 14:04:51Z [overcloud]: CREATE_FAILED Timed out > > 2018-07-30 14:04:51Z [overcloud.ComputeGammaV3.0]: CREATE_FAILED Stack > > CREATE cancelled > > 2018-07-30 14:04:51Z [overcloud.Controller]: CREATE_FAILED CREATE > aborted > > (Task create from ResourceGroup "Controller" Stack "overcloud" > > [690ee33c-8194-4713-a44f-9c8dcf88359f] Timed out) > > 2018-07-30 14:04:51Z [overcloud]: CREATE_FAILED Timed out > > 2018-07-30 14:04:51Z [overcloud.Controller]: UPDATE_FAILED Stack UPDATE > > cancelled > > 2018-07-30 14:04:51Z [overcloud.Controller.0]: CREATE_FAILED Stack > CREATE > > cancelled > > 2018-07-30 14:04:52Z [overcloud.ComputeGammaV3.0]: CREATE_FAILED > > resources[0]: Stack CREATE cancelled > > > > Stack overcloud CREATE_FAILED > > > > overcloud.ComputeGammaV3.0: > > resource_type: OS::TripleO::ComputeGammaV3 > > physical_resource_id: 5755d746-7cbf-4f3d-a9e1-d94a713705a7 > > status: CREATE_FAILED > > status_reason: | > > resources[0]: Stack CREATE cancelled > > overcloud.Controller.0: > > resource_type: OS::TripleO::Controller > > physical_resource_id: 4bcf84c1-1d54-45ee-9f81-b6dda780cbd7 > > status: CREATE_FAILED > > status_reason: | > > resources[0]: Stack CREATE cancelled > > Not cleaning temporary directory /tmp/tripleoclient-vxGzKo > > Not cleaning temporary directory /tmp/tripleoclient-vxGzKo > > Heat Stack create failed. > > Heat Stack create failed. > > (undercloud) [stack at staging-director ~]$ > > > > So this is a timeout likely caused by a bad network configuration so > no response makes it back to Heat during the deployment. Heat never > gets a response back so it just times out. You'll need to check your > host network configuration and trouble shoot that. 
> > Thanks, > -Alex > > > It seems that it wasn't able to configure the OVS bridges > > > > (undercloud) [stack at staging-director ~]$ openstack software deployment > show > > 4b4fc54f-7912-40e2-8ad4-79f6179fe701 > > > +---------------+--------------------------------------------------------+ > > | Field | Value > | > > > +---------------+--------------------------------------------------------+ > > | id | 4b4fc54f-7912-40e2-8ad4-79f6179fe701 > | > > | server_id | 0accb7a3-4869-4497-8f3b-5a3d99f3926b > | > > | config_id | 2641b4dd-afc7-4bf5-a2e2-481c207e4b7f > | > > | creation_time | 2018-07-30T13:19:44Z > | > > | updated_time | > | > > | status | IN_PROGRESS > | > > | status_reason | Deploy data available > | > > | input_values | {u'interface_name': u'nic1', u'bridge_name': u'br-ex'} > | > > | action | CREATE > | > > > +---------------+--------------------------------------------------------+ > > (undercloud) [stack at staging-director ~]$ openstack software deployment > show > > a297e8ae-f4c9-41b0-938f-c51f9fe23843 > > > +---------------+--------------------------------------------------------+ > > | Field | Value > | > > > +---------------+--------------------------------------------------------+ > > | id | a297e8ae-f4c9-41b0-938f-c51f9fe23843 > | > > | server_id | 145167da-9b96-4eee-bfe9-399b854c1e84 > | > > | config_id | d1baf0a5-de9b-48f2-b486-9f5d97f7e94f > | > > | creation_time | 2018-07-30T13:17:29Z > | > > | updated_time | > | > > | status | IN_PROGRESS > | > > | status_reason | Deploy data available > | > > | input_values | {u'interface_name': u'nic1', u'bridge_name': u'br-ex'} > | > > | action | CREATE > | > > > +---------------+--------------------------------------------------------+ > > (undercloud) [stack at staging-director ~]$ > > > > Regards, > > Samuel > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ComputeV3.yaml Type: application/x-yaml Size: 4397 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: controller.yaml Type: application/x-yaml Size: 4269 bytes Desc: not available URL: From prometheanfire at gentoo.org Tue Jul 31 20:07:50 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Tue, 31 Jul 2018 15:07:50 -0500 Subject: [openstack-dev] [ptg][requiremets] - Stein Etherpad Message-ID: <20180731200750.47uivfdjoinfdbjr@gentoo.org> https://etherpad.openstack.org/p/stein-PTG-requirements That is all -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From mriedemos at gmail.com Tue Jul 31 21:15:16 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 31 Jul 2018 16:15:16 -0500 Subject: [openstack-dev] [designate][stable] Stable Core Team Updates In-Reply-To: <1fd50f5e-9aa4-8c6d-729f-eecac4d7d5e6@ham.ie> References: <1fd50f5e-9aa4-8c6d-729f-eecac4d7d5e6@ham.ie> Message-ID: <6574fab1-6d3f-0841-dbc2-e48252eb5ef2@gmail.com> On 7/31/2018 12:39 PM, Graham Hayes wrote: > I would like to nominate 2 new stable core reviewers for Designate. > > * Erik Olof Gunnar Andersson > * Jens Harbott (frickler) > > Erik has been doing a lot of stable reviews recently, and Jens has shown > that he understands the policy in other reviews (and has stable rights > on other repositories (like DevStack) already). > > Thanks, > > Graham Hayes > > 1 - > https://review.openstack.org/#/q/(project:openstack/designate+OR+project:openstack/python-designateclient+OR+project:openstack/designate-specs+OR+project:openstack/designate-dashboard+OR+project:openstack/designate-tempest-plugin)+branch:%255Estable/.*+reviewedby:%22Erik+Olof+Gunnar+Andersson+%253Ceandersson%2540blizzard.com%253E%22 > > 2 - > https://review.openstack.org/#/q/(project:openstack/designate+OR+project:openstack/python-designateclient+OR+project:openstack/designate-specs+OR+project:openstack/designate-dashboard+OR+project:openstack/designate-tempest-plugin)+branch:%255Estable/.*+reviewedby:%22Jens+Harbott+(frickler)+%253Cj.harbott%2540x-ion.de%253E%22 Looks OK to me on both. -- Thanks, Matt From sean.mcginnis at gmx.com Tue Jul 31 21:23:46 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 31 Jul 2018 16:23:46 -0500 Subject: [openstack-dev] [designate][stable] Stable Core Team Updates In-Reply-To: <6574fab1-6d3f-0841-dbc2-e48252eb5ef2@gmail.com> References: <1fd50f5e-9aa4-8c6d-729f-eecac4d7d5e6@ham.ie> <6574fab1-6d3f-0841-dbc2-e48252eb5ef2@gmail.com> Message-ID: <20180731212345.GA19289@sm-workstation> On Tue, Jul 31, 2018 at 04:15:16PM -0500, Matt Riedemann wrote: > On 7/31/2018 12:39 PM, Graham Hayes wrote: > > I would like to nominate 2 new stable core reviewers for Designate. > > > > * Erik Olof Gunnar Andersson > > * Jens Harbott (frickler) > > > Looks OK to me on both. > > -- > > Thanks, > > Matt > +1 From whayutin at redhat.com Tue Jul 31 21:51:44 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Tue, 31 Jul 2018 15:51:44 -0600 Subject: [openstack-dev] [tripleo][ci][metrics] Stucked in the middle of work because of RDO CI In-Reply-To: References: Message-ID: On Tue, Jul 31, 2018 at 7:41 AM Sagi Shnaidman wrote: > Hi, Martin > > I see master OVB jobs are passing now [1], please recheck. > > [1] http://cistatus.tripleo.org/ > Things have improved and I see a lot of jobs passing however at the same time I see too many jobs failing due to node_failures. We are tracking the data from [1]. Certainly the issue is NOT ideal for development and we need to remain focused on improving the situation. Thanks [1] https://softwarefactory-project.io/zuul/api/tenant/rdoproject.org/builds > > > On Tue, Jul 31, 2018 at 12:24 PM, Martin Magr wrote: > >> Greetings guys, >> >> it is pretty obvious that RDO CI jobs in TripleO projects are broken >> [0]. Once Zuul CI jobs will pass would it be possible to have AMQP/collectd >> patches ([1],[2],[3]) merged please even though the negative result of RDO >> CI jobs? 
Half of the patches for this feature is merged and the other half >> is stucked in this situation, were nobody reviews these patches, because >> there is red -1. Those patches passed Zuul jobs several times already and >> were manually tested too. >> >> Thanks in advance for consideration of this situation, >> Martin >> >> [0] >> https://trello.com/c/hkvfxAdX/667-cixtripleoci-rdo-software-factory-3rd-party-jobs-failing-due-to-instance-nodefailure >> [1] https://review.openstack.org/#/c/578749 >> [2] https://review.openstack.org/#/c/576057/ >> [3] https://review.openstack.org/#/c/572312/ >> >> -- >> Martin Mágr >> Senior Software Engineer >> Red Hat Czech >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > > -- > Best regards > Sagi Shnaidman > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Wes Hayutin Associate MANAGER Red Hat w hayutin at redhat.com T: +1919 <+19197544114>4232509 IRC: weshay View my calendar and check my availability for meetings HERE -------------- next part -------------- An HTML attachment was scrubbed... URL: From aschultz at redhat.com Tue Jul 31 22:09:44 2018 From: aschultz at redhat.com (Alex Schultz) Date: Tue, 31 Jul 2018 16:09:44 -0600 Subject: [openstack-dev] [tripleo][ci][metrics] FFE request for QDR integration in TripleO (Was: Stucked in the middle of work because of RDO CI) In-Reply-To: References: Message-ID: On Tue, Jul 31, 2018 at 11:31 AM, Pradeep Kilambi wrote: > Hi Alex: > > Can you consider this our FFE for the QDR patches. Its mainly blocked on CI > issues. Half the patches for QDR integration are already merged. The other 3 > referenced need to get merged once CI passes. Please consider this out > formal request for FFE for QDR integration in tripleo. > Ok if it's just these patches and there is no further work it should be OK. I did point out (prior to CI issues) that the patch[0] actually broke the ovb jobs back in June. It seemed to be related to missing containers or something to that effect. So we'll need to be extra care when merging this to ensure it does not break anything. If we get clean jobs prior to the rc1, we can merge it. If not I'd say we need to hold off. I don't consider this is a blocking feature. Thanks, -Alex [0] https://review.openstack.org/#/c/578749/ > Cheers, > ~ Prad > > On Tue, Jul 31, 2018 at 7:40 AM Sagi Shnaidman wrote: >> >> Hi, Martin >> >> I see master OVB jobs are passing now [1], please recheck. >> >> [1] http://cistatus.tripleo.org/ >> >> On Tue, Jul 31, 2018 at 12:24 PM, Martin Magr wrote: >>> >>> Greetings guys, >>> >>> it is pretty obvious that RDO CI jobs in TripleO projects are broken >>> [0]. Once Zuul CI jobs will pass would it be possible to have AMQP/collectd >>> patches ([1],[2],[3]) merged please even though the negative result of RDO >>> CI jobs? Half of the patches for this feature is merged and the other half >>> is stucked in this situation, were nobody reviews these patches, because >>> there is red -1. Those patches passed Zuul jobs several times already and >>> were manually tested too. 
>>> >>> Thanks in advance for consideration of this situation, >>> Martin >>> >>> [0] >>> https://trello.com/c/hkvfxAdX/667-cixtripleoci-rdo-software-factory-3rd-party-jobs-failing-due-to-instance-nodefailure >>> [1] https://review.openstack.org/#/c/578749 >>> [2] https://review.openstack.org/#/c/576057/ >>> [3] https://review.openstack.org/#/c/572312/ >>> >>> -- >>> Martin Mágr >>> Senior Software Engineer >>> Red Hat Czech >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> >> >> -- >> Best regards >> Sagi Shnaidman > > > > -- > Cheers, > ~ Prad From tony at bakeyournoodle.com Tue Jul 31 23:55:13 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Wed, 1 Aug 2018 09:55:13 +1000 Subject: [openstack-dev] [all][election] PTL nominations are now closed Message-ID: <20180731235512.GB15918@thor.bakeyournoodle.com> Hello all, The PTL Nomination period is now over. The official candidate list is available on the election website[0]. There are 8 projects without candidates, so according to this resolution[1], the TC will have to decide how the following projects will proceed: Dragonflow, Freezer, Loci, Packaging_Rpm, RefStack, Searchlight, Trove and Winstackers. There are 2 projects that will have elections: Senlin, Tacker. The details for those will be posted shortly after we setup the CIVS system. Thank you, [0] http://governance.openstack.org/election/#stein-ptl-candidates [1] http://governance.openstack.org/resolutions/20141128-elections-process-for-leaderless-programs.html Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: