From stdake at cisco.com Thu Feb 1 00:08:45 2018 From: stdake at cisco.com (Steven Dake (stdake)) Date: Thu, 1 Feb 2018 00:08:45 +0000 Subject: [openstack-dev] [kolla] PTL non candidacy In-Reply-To: <58FD7494-FBE4-4BD1-A912-390EB96E6CBE@cisco.com> References: <58FD7494-FBE4-4BD1-A912-390EB96E6CBE@cisco.com> Message-ID: <4F5B0BB6-8BDB-440A-A506-5EAFD4144C7C@cisco.com> Cheers Michal. Thank you for your service. Regards -steve From: Vikram Hosakote Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Wednesday, January 31, 2018 at 1:04 PM To: "OpenStack Development Mailing List (not for usage questions)" Subject: Re: [openstack-dev] [kolla] PTL non candidacy Thanks for being a great PTL for kolla Michał ☺ You’re not announcing your non-candidacy for drinking with your OpenStack friends and this does not mean we can’t drink together ;) Regards, Vikram Hosakote IRC: vhosakot From: Michał Jastrzębski Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Wednesday, January 10, 2018 13, 2017 at 10:50 PM To: "OpenStack Development Mailing List (not for usage questions)" Subject: [openstack-dev] [kolla] PTL non candidacy Hello, A bit earlier than usually, but I'd like to say that I won't be running for PTL reelection for Rocky cycle. I had privilege of being PTL of Kolla for last 3 cycles and I would like to thank Kolla community for this opportunity and trust. I'm very proud of what we've accomplished over last 3 releases and I'm sure we will accomplish even greater things in the future! It's good for project to change leadership every now and then. I would encourage everyone in community to consider running, I can promise that this job is ... very interesting;) and extremely rewarding! Thank you all for support and please, support new PTL as much as you supported me. Regards, Michal __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Thu Feb 1 01:38:51 2018 From: melwittt at gmail.com (melanie witt) Date: Wed, 31 Jan 2018 17:38:51 -0800 Subject: [openstack-dev] [nova] PTL candidacy Message-ID: <2BF3652C-1285-44B9-BAD6-10228F82DE44@gmail.com> Hello Stackers, I would like to announce my candidacy for Nova PTL in the Rocky cycle. I have been a core reviewer on Nova since May 2015 and have been working with OpenStack since mid 2012 (ancient times!) and you may know me on IRC as melwitt. I have been both a user and a developer on OpenStack, so I think I see things from a bit more of an all-around perspective. I’ve worked in a company where we ran private OpenStack clouds, so I’ve done deployments, I’ve debugged production issues, I’ve worked on custom internal-only patches -- you name it. Having experienced all of these situations has given me a special interest in solving problems for users, operators, and developers. It has been my mission to work with all of you to help make Nova better. Like Matt, I see the Nova PTL role as a service position to the community. The PTL is there to help the team get things done in Nova. 
That means keeping track of the schedule, keeping tabs on ongoing work and helping people make progress if they’re stuck, facilitating cross-project communication when we’re working on things that integrate with other projects, and easing contribution to the project. On the last point, I have some ideas around easing contribution, mostly having to do with situations where someone may have researched a bug, found a root cause, and can propose a patch, but don’t have the time or knowhow to provide a patch complete with test coverage. In cases I have seen, I reached out to the person and asked if they would mind if I wrote tests for their patch and added myself as co-author. Not only did they not mind, they were happy I offered. So, as an experiment, I would like to keep a list of patches (in the Priorities etherpad) where authors add links to patches they’d like help with in exchange for co-authorship. If a more experienced contributor finds a patch in the list that they’re interested in, they can jump in and fill in the gaps so we end up with a complete patch ready-for-review. In this way, I would like to try to give less experienced authors the opportunity to pair with more experienced authors on patches. Speaking of the Priorities etherpad [1], I’d like to bring it back to active use. It’s a good way IMHO to track ready-for-review patches on the various sub-teams, virt drivers, and blueprints that we have. I think we have been good at reviewing high priority project patch sets but I think we could use more focus on reviewing lower priority blueprint work. I’d like to add a section for approved blueprints to increase visibility on those patches, so that ready-for-review patches don’t get lost in the shuffle. Regretfully, there have been a number of blueprint patches that were ready for review early on in the cycle and did not receive review for lack of visibility. I’d like to do something to keep track of those patches and get the ball rolling on review earlier in the cycle, before the higher priority work ramps up too much. I think keeping a section in the Priorities etherpad for these could help, along with a brief report/reminder of that section’s status in Nova meetings. Bug triage has fallen a bit behind in more recent cycles and I’ve been thinking about how we could improve that. In the past, we had a model where we tag bugs with an area (like ‘api’, ‘compute’, ‘volumes’) and tag owners [2] were responsible for triaging bugs with their tag. The idea is that bug tagging (categorizing) is a quick and simple task that doesn’t require much time. Then, the more time-consuming task of triaging the validity and severity of bugs is load-balanced among tag owners. I’d like to refresh the bug tag owner list and see if we can get back into a pattern where we can spread out bug triage among the team and make more progress there. Another area that could be improved is our communication of “low hanging fruit” work suitable for newer contributors. To be honest, I don’t find much in Nova to be “low hanging fruit” but do want to make an effort to collect and maintain a list of bugs and tasks that are better starting points for people who want to work on Nova. Long ago, we had the low-hanging-fruit etherpad [3] and I’d like to resurrect it to keep track of things that newer contributors could pick up. I think Matt has done great work to increase communication across projects we integrate with and with the operator community. 
I would like to continue that work to maintain those relationships and continue to keep the operator community apprised of changes in Nova that will affect them. I know we are not perfect here but we endeavor to keep the communication lines open and I, for one, welcome feedback on areas we fall short so that we can improve. We got a lot done in Queens, completing 36 out of 42 approved blueprints. I would like to keep that momentum going into the Rocky cycle. I have liked the model of using the PTG to discuss and prioritize work for the cycle and maintaining a detailed PTG etherpad to organize the agenda, document action items, next steps, and conclusions for each topic. I think this model is especially helpful for those who cannot attend the PTG in person, in that they can present their topic for discussion by adding it to the etherpad, we discuss it at the PTG, and then we can follow up asynchronously afterward on IRC or the dev mailing list. Finally, testing and the health of the gate is important to me and I would like to continue the work we’ve done here to actively monitor our test jobs, increase our test coverage, and troubleshoot and solve issues in the gate so that everyone can get their patches tested and merged smoothly. I know things have been challenging in this area lately, but we’ve all worked hard as a team to jump on each problem as it crops up and I’ve been proud of how much everyone has stepped up to help. I feel like I’ve written a lot here, thank you for reading and I appreciate your consideration. -melanie [1] https://etherpad.openstack.org/p/pike-nova-priorities-tracking [2] https://wiki.openstack.org/wiki/Nova/BugTriage#Tag_Owner_List [3] https://etherpad.openstack.org/p/nova-low-hanging-fruit From zhipengh512 at gmail.com Thu Feb 1 01:49:24 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Thu, 1 Feb 2018 09:49:24 +0800 Subject: [openstack-dev] [cyborg][ptl]PTL Candidacy for Rocky Message-ID: Hi Team, I would like to express my interest in continuing to serve as PTL of Cyborg project for Rocky release. I know we have been in a fanatic state for our first release preparation, but when I looked back to the early days when nomad (cyborg's prelife) project kicks off, I feel I'm blessed to be able to work with such a great community and a great dev team, to get where we are now. We took it in our pride that cyborg is one of the few project that is grown entirely in the OpenStack community from the very beginning: no vendor code dump, design discussion from scratch, write every bit of code from zero. Heck we even changed the project name based upon team voting/consensus. I have to admit I also had doubt on how a project like cyborg will really survive, given that we rely so heavily on community effort rather than single vendor human resource investment which is often guaranteed. And also the problem that we are such a young team that none of us is veteran OpenStacker. I'm then hugely encouraged and fascinated by how far we are able to get. For Pike, the three core members from three different companies came up with the basic framework. 
And for Queens, with more devs joining we will pulled of *our own first implementation of the nested resource provider*, *a basic interaction framework between cyborg-conductor and nova-placement*, *a generic driver that will be a reference implementation for vendor drivers*,* Intel FPGA driver* (our first vendor driver !), *SPDK driver *(the first software accelerator driver !), *cyborg-agent resource tracker*, and so on and so forth. I can't really ask more from this amazing team. Moreover for the First Contact SIG initiative, we also established a vibrant Cyborg China Dev community via Wechat group and held many ad-hoc meetings. This effort has helped tremendously on our Queens delivery in the making. We also joined with Nova team to kickoff the Resource Management SIG which will help align the resource data modeling across OpenStack and other related open source communities. With Queens release work still lingering around, the plan for Rocky was actually bounced over several times during team discussion, and here are the items that in my mind are important target: (1) Adopt MS based development cadence. Queens was a chaotic process and I think we learnt a hard lesson from it (2) Finishing up the implementation for Nova-Cyborg interaction spec. (3) Quota and multi-tenancy support (4) FPGA Programmability support. (5) NVIDIA GPU/ARM SoC driver support (6) Meta data standardization. We might take a lesson from Device Tree. (7) Containerization support for Kubernetes DPI plugin. Thanks for your patience on reading through this long email. Again I want to express my will to continue serving cyborg project in Rocky and look forward to your support or any questions you might have. -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From yamamoto at midokura.com Thu Feb 1 04:49:11 2018 From: yamamoto at midokura.com (Takashi Yamamoto) Date: Thu, 1 Feb 2018 13:49:11 +0900 Subject: [openstack-dev] [all][infra] Automatically generated Zuul changes (topic: zuulv3-projects) In-Reply-To: <87shalgb8x.fsf@meyer.lemoncheese.net> References: <87shalgb8x.fsf@meyer.lemoncheese.net> Message-ID: On Thu, Feb 1, 2018 at 2:59 AM, James E. Blair wrote: > Hi, > > Occasionally we will make changes to the Zuul configuration language. > Usually these changes will be backwards compatible, but whether they are > or not, we still want to move things forward. > > Because Zuul's configuration is now spread across many repositories, it > may take many changes to do this. I'm in the process of making one such > change now. > > Zuul no longer requires the project name in the "project:" stanza for > in-repo configuration. Removing it makes it easier to fork or rename a > project. > > I am using a script to create and upload these changes. Because changes > to Zuul's configuration use more resources, I, and the rest of the infra > team, are carefully monitoring this and pacing changes so as not to > overwhelm the system. This is a limitation we'd like to address in the > future, but we have to live with now. 
> > So if you see such a change to your project (the topic will be > "zuulv3-projects"), please observe the following: > > * Go ahead and approve it as soon as possible. > > * Don't be strict about backported change ids. These changes are only > to Zuul config files, the stable backport policy was not intended to > apply to things like this. > > * Don't create your own versions of these changes. My script will > eventually upload changes to all affected project-branches. It's > intentionally a slow process, and attempting to speed it up won't > help. But if there's something wrong with the change I propose, feel > free to push an update to correct it. 1. is it ok to create my version of the change when making other changes to the file? eg. https://review.openstack.org/#/c/538200/ 2. as a reviewer, what should i do to the similar changes which is not yours? https://review.openstack.org/#/q/topic:zuulv3-projects+NOT+owner:%22James+E.+Blair+%253Ccorvus%2540inaugust.com%253E%22 > > Thanks, > > Jim > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From prometheanfire at gentoo.org Thu Feb 1 05:05:50 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Wed, 31 Jan 2018 23:05:50 -0600 Subject: [openstack-dev] [requirements][ptl] Announcing my candidacy for PTL of the requirements project Message-ID: <20180201050550.dl5ps6nrll3awb6n@gentoo.org> I would like to announce my candidacy for PTL of the Requirements project for the Rocky cycle. The following will be my goals for the cycle, in order of importance: 1. The primary goal is to keep a tight rein on global-requirements and upper-constraints updates. 2. Speaking of global-requirements updates, the primary goal this cycle will continue to work on project specific requirements. All projects would continue to use upper-constraints.txt to ensure co-installability, but would be able to manage their requirements.txt file, largely on their own. https://bugs.launchpad.net/openstack-requirements/+bug/1719009 is the bug tracking per-project requirements. 3. Un-cap requirements where possible (stuff like eventlet). 4. Publish constraints and requirements to streamline the freeze process. https://bugs.launchpad.net/openstack-requirements/+bug/1719006 is the bug tracking the publish job. 5. Audit global-requirements and upper-constraints for redundancies. One of the rules we have for new entrants to global-requirements and/or upper-constraints is that they be non-redundant. Keeping that rule in mind, audit the list of requirements for possible redundancies and if possible, reduce the number of requirements we manage. 6. Audit global-requirements minimums. While technically not supported from a co-installability or stability standpoint (as it's not tested), we should endeavor to have the global-requirements minimums work in Openstack. Ensuring any bugs, features or changes that have happened since the global-requirements minimum was released til the upper-constraints was defined are not NEEDED by Openstack. This will rely on the cross gating work tonyb did. I look forward to continue working with you in this cycle, as your PTL or not. Thanks for your time, Matthew Thode IRC: prometheanfire -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From chenyingko at gmail.com Thu Feb 1 06:54:50 2018 From: chenyingko at gmail.com (Chen Ying) Date: Thu, 1 Feb 2018 14:54:50 +0800 Subject: [openstack-dev] [Karbor] [ptl] PTL candidacy for Rocky Message-ID: Hi all, I would like to nominate myself to be the Karbor PTL for the Rocky cycle. I began to contribute to Karbor project since 2016.01, as a core reviewer from Newton cycle, and as Karbor PTL for the Queens cycle. It is my pleasure to work with the great team to make this project better and better. In Queens we have done a lot of great works about OpenStack resources protection in karbor: API support the checkpoint verification. Support quotas. Support checkpoint cross AZ copy API. Cross-site backup and restore. Support freezer protection plugin. Support K8S pods protection integration. Implement policies in code. Other achievements are: Support operation log api. Support API json schema validation. Support service management API and so on. For the next cycle I'd like to focus on the tasks as follows: - Grow the Karbor team of contributors - Cross projects integration and improvement - Make project goals. Also following community goals and project goals to make sure it will complete. - Usability: documentation, karbor client and Karbor Horizon. Ensure Karbor Horizon and client continue to be a robust and easy-to-use tool for Karbor. If you have any ideas on these points we're always happy to discuss and correct our plans. We're always happy to get new contributors on the project and always ready to help people interested in Karbor development get up to speed. I'm excited to continue contributing to Karbor. Best Regards, Chen Ying -------------- next part -------------- An HTML attachment was scrubbed... URL: From aj at suse.com Thu Feb 1 07:22:04 2018 From: aj at suse.com (Andreas Jaeger) Date: Thu, 1 Feb 2018 08:22:04 +0100 Subject: [openstack-dev] [neutron][lbaas][neutron-lbaas][octavia] Announcing the deprecation of neutron-lbaas and neutron-lbaas-dashboard In-Reply-To: References: <08d47fce-1fb0-0fa3-9ca7-cea25da60e3c@suse.com> Message-ID: <27b4cffd-8ffb-3afc-7ae5-8c8b6854c31e@suse.com> On 2018-01-31 22:58, Akihiro Motoki wrote: > I don't think we need to drop translation support NOW (at least for > neutron-lbaas-dashboard). > There might be fixes which affects translation and/or there might be > translation improvements. > I don't think a deprecation means no translation fix any more. It > sounds too aggressive. > Is there any problem to keep translations for them? Reading the whole FAQ - since bug fixes are planned, translations can merge back. So, indeed we can keep translation infrastructure set up. I recommend to translators to remove neutron-lbaas-dashboard from the priority list, Andreas > Akihiro > > 2018-02-01 3:28 GMT+09:00 Andreas Jaeger : >> In that case, I suggest to remove translation jobs for these repositories, >> >> Andreas >> -- >> Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi >> SUSE LINUX GmbH, Maxfeldstr. 
5, 90409 Nürnberg, Germany >> GF: Felix Imendörffer, Jane Smithard, Graham Norton, >> HRB 21284 (AG Nürnberg) >> GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From aj at suse.com Thu Feb 1 07:26:17 2018 From: aj at suse.com (Andreas Jaeger) Date: Thu, 1 Feb 2018 08:26:17 +0100 Subject: [openstack-dev] [all][infra] Automatically generated Zuul changes (topic: zuulv3-projects) In-Reply-To: References: <87shalgb8x.fsf@meyer.lemoncheese.net> Message-ID: <0bb6c7ac-785e-16f4-557e-1d82320ce46e@suse.com> On 2018-02-01 05:49, Takashi Yamamoto wrote: > [...] >> * Don't create your own versions of these changes. My script will >> eventually upload changes to all affected project-branches. It's >> intentionally a slow process, and attempting to speed it up won't >> help. But if there's something wrong with the change I propose, feel >> free to push an update to correct it. > 1. is it ok to create my version of the change when making other > changes to the file? eg. https://review.openstack.org/#/c/538200/ If you add a *new* zuul.yaml file, do not add the project name stanza. His scripts will not catch new additions. If you change an existing file, wait for Jim's change. > 2. as a reviewer, what should i do to the similar changes which is not yours? > https://review.openstack.org/#/q/topic:zuulv3-projects+NOT+owner:%22James+E.+Blair+%253Ccorvus%2540inaugust.com%253E%22 I would block any new ones with a reference to the message. There's really no sense in doing those changes and cause extra work, Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From slawek at kaplonski.pl Thu Feb 1 08:07:59 2018 From: slawek at kaplonski.pl (=?utf-8?B?U8WCYXdvbWlyIEthcMWCb8WEc2tp?=) Date: Thu, 1 Feb 2018 09:07:59 +0100 Subject: [openstack-dev] [sdk][ptl] PTL Candidacy for Rocky In-Reply-To: References: Message-ID: Big +1 from me :) — Best regards Slawek Kaplonski slawek at kaplonski.pl > Wiadomość napisana przez Monty Taylor w dniu 31.01.2018, o godz. 16:54: > > Hi everybody! > > I'd like to run for PTL of OpenStackSDK again > > This last cycle was pretty exciting. We merged the shade and openstacksdk projects into a single team. We shifted os-client-config to that team as well. We merged the code from shade and os-client-config into openstacksdk, and then renamed the team. > > It wasn't just about merging projects though. 
We got some rework done to base the Proxy classes on keystoneauth Adapters providing direct passthrough REST availability for services. We finished the Resource2/Proxy2 transition. We updated pagination to work for all of the OpenStack services - and in the process uncovered a potential cross-project goal. And we tied services in openstacksdk to services listed in the Service Types Authority. > > Moving forward, there's tons to do. > > First and foremost we need to finish integrating the shade code into the sdk codebase. The sdk layer and the shade layer are currently friendly but separate, and that doesn't make sense long term. To do this, we need to figure out a plan for rationalizing the return types - shade returns munch.Munch objects which are dicts that support object attribute access. The sdk returns Resource objects. > > There are also multiple places where the logic in the shade layer can and should move into the sdk's Proxy layer. Good examples of this are swift object uploads and downloads and glance image uploads. > > I'd like to move masakari and tricircle's out-of-tree SDK classes in tree. > > shade's caching and rate-limiting layer needs to be shifted to be able to apply to both levels, and the special caching for servers, ports and > floating-ips needs to be replaced with the general system. For us to do that though, the general system needs to be improved to handle nodepool's batched rate-limited use case as well. > > We need to remove the guts of both shade and os-client-config in their repos and turn them into backwards compatibility shims. > > We need to work with the python-openstackclient team to finish getting the current sdk usage updated to the non-Profile-based flow, and to make sure we're providing what they need to start replacing uses of python-*client with uses of sdk. > > I know the folks with the shade team background are going to LOVE this one, but we need to migrate existing sdk tests that mock sdk objects to requests-mock. (We also missed a few shade tests that still mock out methods on OpenStackCloud that need to get transitioned) > > Finally - we need to get a 1.0 out this cycle. We're very close - the main sticking point now is the shade/os-client-config layer, and specifically cleaning up a few pieces of shade's API that weren't great but which we couldn't change due to API contracts. > > I'm sure there will be more things to do too. There always are. > > In any case, I'd love to keep helping to pushing these rocks uphill. > > Thanks! > Monty > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From saverio.proto at switch.ch Thu Feb 1 08:56:10 2018 From: saverio.proto at switch.ch (Saverio Proto) Date: Thu, 1 Feb 2018 09:56:10 +0100 Subject: [openstack-dev] [Openstack-operators] LTS pragmatic example In-Reply-To: <053ab4a2-4159-ce84-5f1d-f88d1445f063@gmail.com> References: <20171114154658.mpnwfsn7uzmis2l4@redhat.com> <1510675891-sup-8327@lrrr.local> <4ef3b8ff-5374-f440-5595-79e1d33ce3bb@switch.ch> <332af66b-320f-bda4-495f-870dd9e10349@gmail.com> <54c9afed-129b-914c-32f4-451dbdf41279@switch.ch> <053ab4a2-4159-ce84-5f1d-f88d1445f063@gmail.com> Message-ID: Hello ! thanks for accepting the patch :) It looks like the best is always to send an email and have a short discussion together, when we are not sure about a patch. 
thank you Cheers, Saverio From mbooth at redhat.com Thu Feb 1 09:41:46 2018 From: mbooth at redhat.com (Matthew Booth) Date: Thu, 1 Feb 2018 09:41:46 +0000 Subject: [openstack-dev] [nova] Requesting eyes on fix for bug 1686703 In-Reply-To: <3896f266-0e7f-a1b0-68f2-06896e5cae72@gmail.com> References: <3896f266-0e7f-a1b0-68f2-06896e5cae72@gmail.com> Message-ID: On 31 January 2018 at 16:32, Matt Riedemann wrote: > On 1/31/2018 7:30 AM, Matthew Booth wrote: > >> Could I please have some eyes on this bugfix: >> https://review.openstack.org/#/c/462521/ . I addressed an issue raised >> in August 2017, and it's had no negative feedback since. It would be good >> to get this one finished. >> > > First, I'd like to avoid setting a precedent of asking for reviews in the > ML. So please don't do this. > I don't generally do this, but I think a polite request after 6 months or so is reasonable when something has fallen through the cracks. > Second, this is a latent issue, and we're less than two weeks to RC1, so > I'd prefer that we hold this off until Rocky opens up in case it introduces > any regressions so we at least have time to deal with those when we're not > in stop-ship mode. > That's fine. Looks like I have new feedback to address in the meantime anyway, Matt -- Matthew Booth Red Hat OpenStack Engineer, Compute DFG Phone: +442070094448 (UK) -------------- next part -------------- An HTML attachment was scrubbed... URL: From moshele at mellanox.com Thu Feb 1 11:48:52 2018 From: moshele at mellanox.com (Moshe Levi) Date: Thu, 1 Feb 2018 11:48:52 +0000 Subject: [openstack-dev] [tripleo] opendaylight OpenDaylightConnectionProtocol deprecation issue In-Reply-To: <19c3af79-c630-2089-c6d6-921f6d087a11@nemebean.com> References: <19c3af79-c630-2089-c6d6-921f6d087a11@nemebean.com> Message-ID: > -----Original Message----- > From: Ben Nemec [mailto:openstack at nemebean.com] > Sent: Wednesday, January 31, 2018 5:10 PM > To: OpenStack Development Mailing List (not for usage questions) > ; Moshe Levi > > Subject: Re: [openstack-dev] [tripleo] opendaylight > OpenDaylightConnectionProtocol deprecation issue > > > > On 01/29/2018 04:27 AM, Moshe Levi wrote: > > Hi all, > > > > It seem that this commit [1] deprecated the > > OpenDaylightConnectionProtocol, but it also remove it. > > > > This is causing the following issue when we deploy opendaylight non > > containerized. See [2] > > > > One solution is to add back the OpenDaylightConnectionProtocol [3] the > > other solution is to remove the OpenDaylightConnectionProtocol from > > the deprecated parameter_groups [4]. > > Looks like the deprecation was done incorrectly. The parameter should have > been left in place and referenced in the deprecated group. So I think the fix > would just be to put the parameter definition back. Ok I proposed this fix https://review.openstack.org/#/c/539917/ to resolve it. 
> > > > > [1] - > > > https://emea01.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgit > > hub.com%2Fopenstack%2Ftripleo-heat- > templates%2Fcommit%2Faf4ce05dc5270b > > > 84864a382ddb2a1161d9082eab&data=02%7C01%7Cmoshele%40mellanox.co > m%7C5d3 > > > 64a8250a14e007a2608d568bca27e%7Ca652971c7d2e4d9ba6a4d149256f461b% > 7C0%7 > > > C0%7C636530081767314146&sdata=gNmuv%2FzlusnYp7TXI6t9dFIRbPRC2MDj > F5yoxa > > ktGGE%3D&reserved=0 > > > > > > [2] - > > > https://emea01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fpast > > > e.openstack.org%2Fshow%2F656702%2F&data=02%7C01%7Cmoshele%40m > ellanox.c > > > om%7C5d364a8250a14e007a2608d568bca27e%7Ca652971c7d2e4d9ba6a4d14 > 9256f46 > > > 1b%7C0%7C0%7C636530081767314146&sdata=AMfcY2FcaOm8lZNMOu3iYKYf > 4ecjgP18 > > 6im32Ujg1tE%3D&reserved=0 > > > > [3] - > > > https://emea01.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgit > > hub.com%2Fopenstack%2Ftripleo-heat- > templates%2Fcommit%2Faf4ce05dc5270b > > 84864a382ddb2a1161d9082eab%23diff- > 21674daa44a327c016a80173efeb10e7L20& > > > data=02%7C01%7Cmoshele%40mellanox.com%7C5d364a8250a14e007a2608d > 568bca2 > > > 7e%7Ca652971c7d2e4d9ba6a4d149256f461b%7C0%7C0%7C636530081767314 > 146&sda > > > ta=5pyiYUINi%2FmQ1%2F19kaIJY2KQ35bHhbZ%2Fq7PvUnRFZP4%3D&reserv > ed=0 > > > > > > [4] - > > > https://emea01.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgit > > hub.com%2Fopenstack%2Ftripleo-heat- > templates%2Fcommit%2Faf4ce05dc5270b > > 84864a382ddb2a1161d9082eab%23diff- > 21674daa44a327c016a80173efeb10e7R112 > > > &data=02%7C01%7Cmoshele%40mellanox.com%7C5d364a8250a14e007a260 > 8d568bca > > > 27e%7Ca652971c7d2e4d9ba6a4d149256f461b%7C0%7C0%7C63653008176731 > 4146&sd > > > ata=uUi6XJbZs6LOuGkqDD%2BWTLaZvso8U7srwbL%2BKXvGm44%3D&reser > ved=0 > > > > > > > > > > > __________________________________________________________ > ____________ > > ____ OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > https://emea01.safelinks.protection.outlook.com/?url=http%3A%2F%2Flists. > openstack.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fopenstack- > dev&data=02%7C01%7Cmoshele%40mellanox.com%7C5d364a8250a14e007a > 2608d568bca27e%7Ca652971c7d2e4d9ba6a4d149256f461b%7C0%7C0%7C636 > 530081767314146&sdata=m2R5u%2FXA6SnPFk%2FHW13W%2BCYtMbGUI9p > Ww%2B3U2qFmUaw%3D&reserved=0 > > From thierry at openstack.org Thu Feb 1 13:14:14 2018 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 1 Feb 2018 14:14:14 +0100 Subject: [openstack-dev] PTG Dublin - Price Increase this Thursday In-Reply-To: <7FA0A5F3-82D4-48FE-A407-6F85AF4E57C7@openstack.org> References: <7FA0A5F3-82D4-48FE-A407-6F85AF4E57C7@openstack.org> Message-ID: <00c87bb7-dd28-aa0b-6fcb-47801ca03e2a@openstack.org> Reminder: Last hours to pick up your PTG ticket at normal price ! Kendall Waters wrote: > Hi everyone, > > We are four weeks out from the Dublin Project Teams Gathering (February > 26 - March 2nd), and we are expecting the event to sell out! You have > two more days to book your ticket at the normal price. We'll switch to > last-minute price (USD $200) on Thursday, February 1st at 12 noon CT > (18:00 UTC). So go and grab your ticket before the price increases! 
[1] > > Cheers, > Kendall > > [1] https://rockyptg.eventbrite.com -- Thierry Carrez (ttx) From ifat.afek at nokia.com Thu Feb 1 13:24:56 2018 From: ifat.afek at nokia.com (Afek, Ifat (Nokia - IL/Kfar Sava)) Date: Thu, 1 Feb 2018 13:24:56 +0000 Subject: [openstack-dev] [vitrage][ptl] PTL candidacy for Rocky Message-ID: <6E6DBE8A-5323-448B-95E2-F405E07C8E30@nokia.com> Hi all, I would like to announce my candidacy to continue as Vitrage PTL for the Rocky release. I’ve been the PTL of Vitrage since the day it started, in the Mitaka release. I think we have made an amazing journey, and we now have a mature, stable and well known project. During the Queens cycle our community has grown, and we managed to complete many important tasks, like: * API for template add and template delete * Enhancements in the templates language * API for registering web hooks on Vitrage alarms * Performance enhancements, mostly around parallel evaluation of the templates I believe that these new features will greatly improve the usability of Vitrage. As for the Rocky cycle, I think we have many challenging tasks in our road map. We have a great team which combines very experienced contributors and enthusiastic newcomers, and we are always happy to welcome new contributors. The issues that I think we should focus on are: * Alarm and resource aggregation * Proactive RCA (Root Cause Analysis) * RCA history * Kubernetes Support * API enhancements, mostly around the topology queries Hi all, I would like to announce my candidacy to continue as Vitrage PTL for the Rocky release. I’ve been the PTL of Vitrage since the day it started, in the Mitaka release. I think we have made an amazing journey, and we now have a mature, stable and well known project. During the Queens cycle our community has grown, and we managed to complete many important tasks, like: * API for template add and template delete * Enhancements in the templates language * API for registering web hooks on Vitrage alarms * Performance enhancements, mostly around parallel evaluation of the templates I believe that these new features will greatly improve the usability of Vitrage. As for the Rocky cycle, I think we have many challenging tasks in our road map. We have a great team which combines very experienced contributors and enthusiastic newcomers, and we are always happy to welcome new contributors. The issues that I think we should focus on are: * Alarm and resource aggregation * Proactive RCA (Root Cause Analysis) * RCA history * Kubernetes Support * API enhancements, mostly around the topology queries I look forward to working with you all in the coming cycle. Thanks, Ifat. From mihaela.balas at orange.com Thu Feb 1 13:53:39 2018 From: mihaela.balas at orange.com (mihaela.balas at orange.com) Date: Thu, 1 Feb 2018 13:53:39 +0000 Subject: [openstack-dev] [neutron-lbaas][octavia]Octavia request poll interval not respected Message-ID: <10892_1517493221_5A731BE5_10892_333_15_849F1D1DBD4A00479343403412AE4F8201AB0BE3B3@ESSEN.office.orange.intra> Hello, I have the following setup: Neutron - Newton version Octavia - Ocata version Neutron LBaaS had the following configuration in services_lbaas.conf: [octavia] ...... # Interval in seconds to poll octavia when an entity is created, updated, or # deleted. (integer value) request_poll_interval = 2 # Time to stop polling octavia when a status of an entity does not change. # (integer value) request_poll_timeout = 300 .................................... 
However, neutron-lbaas seems not to respect the request poll interval and it takes about 15 minutes to create a load balancer+listener+pool+members+hm. Below, you have the timestamps for the API calls made by neutron towards Octavia (extracted with tcpdump when I create a load balancer from horizon GUI): 10.100.0.14 - - [01/Feb/2018 12:11:53] "POST /v1/loadbalancers HTTP/1.1" 202 437 10.100.0.14 - - [01/Feb/2018 12:11:54] "GET /v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 430 10.100.0.14 - - [01/Feb/2018 12:11:58] "GET /v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 447 10.100.0.14 - - [01/Feb/2018 12:12:00] "GET /v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 447 10.100.0.14 - - [01/Feb/2018 12:14:12] "GET /v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 438 10.100.0.14 - - [01/Feb/2018 12:16:23] "POST /v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380/listeners HTTP/1.1" 202 445 10.100.0.14 - - [01/Feb/2018 12:16:23] "GET /v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 446 10.100.0.14 - - [01/Feb/2018 12:18:32] "GET /v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 438 10.100.0.14 - - [01/Feb/2018 12:18:37] "POST /v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380/pools HTTP/1.1" 202 318 10.100.0.14 - - [01/Feb/2018 12:18:37] "GET /v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 446 10.100.0.14 - - [01/Feb/2018 12:20:46] "GET /v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 438 10.100.0.14 - - [01/Feb/2018 12:23:00] "POST /v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380/pools/ea11699e-3fff-445c-8dd0-2acfbff69c9c/members HTTP/1.1" 202 317 10.100.0.14 - - [01/Feb/2018 12:23:00] "GET /v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 446 10.100.0.14 - - [01/Feb/2018 12:23:05] "GET /v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 438 10.100.0.14 - - [01/Feb/2018 12:23:08] "POST /v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380/pools/ea11699e-3fff-445c-8dd0-2acfbff69c9c/members HTTP/1.1" 202 316 10.100.0.14 - - [01/Feb/2018 12:23:08] "GET /v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 446 10.100.0.14 - - [01/Feb/2018 12:25:20] "GET /v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 438 10.100.0.14 - - [01/Feb/2018 12:25:23] "POST /v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380/pools/ea11699e-3fff-445c-8dd0-2acfbff69c9c/healthmonitor HTTP/1.1" 202 215 10.100.0.14 - - [01/Feb/2018 12:27:30] "GET /v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 437 It seems that, after 1 or 2 polls, it waits for more than two minutes until the next poll. Is it normal? Has anyone seen this behavior? Thank you, Mihaela Balas _________________________________________________________________________________________________________________________ Ce message et ses pieces jointes peuvent contenir des informations confidentielles ou privilegiees et ne doivent donc pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce message par erreur, veuillez le signaler a l'expediteur et le detruire ainsi que les pieces jointes. Les messages electroniques etant susceptibles d'alteration, Orange decline toute responsabilite si ce message a ete altere, deforme ou falsifie. Merci. 
This message and its attachments may contain confidential or privileged information that may be protected by law; they should not be distributed, used or copied without authorisation. If you have received this email in error, please notify the sender and delete this message and its attachments. As emails may be altered, Orange is not liable for messages that have been modified, changed or falsified. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ifat.afek at nokia.com Thu Feb 1 13:59:14 2018 From: ifat.afek at nokia.com (Afek, Ifat (Nokia - IL/Kfar Sava)) Date: Thu, 1 Feb 2018 13:59:14 +0000 Subject: [openstack-dev] [vitrage][ptl] PTL candidacy for Rocky Message-ID: <62D46001-D456-42B9-AFD9-4A9312EE21F5@nokia.com> Sorry for the copy&paste problem in the previous email… Ifat On 01/02/2018, 15:24, "Afek, Ifat (Nokia - IL/Kfar Sava)" wrote: Hi all, I would like to announce my candidacy to continue as Vitrage PTL for the Rocky release. I’ve been the PTL of Vitrage since the day it started, in the Mitaka release. I think we have made an amazing journey, and we now have a mature, stable and well known project. During the Queens cycle our community has grown, and we managed to complete many important tasks, like: * API for template add and template delete * Enhancements in the templates language * API for registering web hooks on Vitrage alarms * Performance enhancements, mostly around parallel evaluation of the templates I believe that these new features will greatly improve the usability of Vitrage. As for the Rocky cycle, I think we have many challenging tasks in our road map. We have a great team which combines very experienced contributors and enthusiastic newcomers, and we are always happy to welcome new contributors. The issues that I think we should focus on are: * Alarm and resource aggregation * Proactive RCA (Root Cause Analysis) * RCA history * Kubernetes Support * API enhancements, mostly around the topology queries I look forward to working with you all in the coming cycle. Thanks, Ifat. From derekh at redhat.com Thu Feb 1 14:35:53 2018 From: derekh at redhat.com (Derek Higgins) Date: Thu, 1 Feb 2018 14:35:53 +0000 Subject: [openstack-dev] [tripleo]Testing ironic in the overcloud Message-ID: Hi All, I've been working on a set of patches as a WIP to test ironic in the overcloud[1], the approach I've started with is to add ironic into the overcloud controller in scenario004. Also to run a script on the controller (as a NodeExtraConfigPost) that sets up a VM with vbmc that can then be controlled by ironic. The WIP currently replaces the current tempest tests with some commands to sanity test the setup. This essentially works but things need to be cleaned up a bit so I've a few questions o Is scenario004 the correct choice? o Should I create a new tempest test for baremetal as some of the networking stuff is different? o Is running a script on the controller with NodeExtraConfigPost the best way to set this up or should I be doing something with quickstart? I don't think quickstart currently runs things on the controler does it? thanks, Derek. [1] - https://review.openstack.org/#/c/485261 https://review.openstack.org/#/c/509728/ https://review.openstack.org/#/c/509829/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From emilien at redhat.com Thu Feb 1 15:36:50 2018 From: emilien at redhat.com (Emilien Macchi) Date: Thu, 1 Feb 2018 07:36:50 -0800 Subject: [openstack-dev] [tripleo]Testing ironic in the overcloud In-Reply-To: References: Message-ID: On Thu, Feb 1, 2018 at 6:35 AM, Derek Higgins wrote: > Hi All, > I've been working on a set of patches as a WIP to test ironic in the > overcloud[1], the approach I've started with is to add ironic into the > overcloud controller in scenario004. Also to run a script on the controller > (as a NodeExtraConfigPost) that sets up a VM with vbmc that can then be > controlled by ironic. The WIP currently replaces the current tempest tests > with some commands to sanity test the setup. This essentially works but > things need to be cleaned up a bit so I've a few questions > > o Is scenario004 the correct choice? > Because we might increase the timeout risk on scenario004, I would recommend to create a new dedicated scenario that would deploy a very basic overcloud with just ironic + dependencies (keystone, glance, neutron, and nova?) > > o Should I create a new tempest test for baremetal as some of the > networking stuff is different? > I think we would need to run baremetal tests for this new featureset, see existing files for examples. > > o Is running a script on the controller with NodeExtraConfigPost the best > way to set this up or should I be doing something with quickstart? I don't > think quickstart currently runs things on the controler does it? > What kind of thing do you want to run exactly? I'll let the CI squad replies as well but I think we need a new scenario, that we would only run when touching ironic files in tripleo. Using scenario004 really increase the risk of timeout and we don't want it. Thanks for this work! -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Thu Feb 1 15:51:52 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Thu, 1 Feb 2018 09:51:52 -0600 Subject: [openstack-dev] [keystone] Queens RC review dashboard Message-ID: <7b4c6301-790d-6c98-ff7a-15a4d312427f@gmail.com> Hey all, Just like with feature freeze, I put together a review dashboard that contains patches we need to land in order to cut a release candidate [0]. I'll be adding more patches throughout the day, but so far there are 21 changes there waiting for review. If there is something I missed, please don't hesitate to ping me and I'll get it added. Thanks for all the hard work. We're on the home stretch! [0] https://goo.gl/XVw3wr -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From mriedemos at gmail.com Thu Feb 1 15:58:58 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 1 Feb 2018 09:58:58 -0600 Subject: [openstack-dev] [Openstack-operators] LTS pragmatic example In-Reply-To: References: <20171114154658.mpnwfsn7uzmis2l4@redhat.com> <1510675891-sup-8327@lrrr.local> <4ef3b8ff-5374-f440-5595-79e1d33ce3bb@switch.ch> <332af66b-320f-bda4-495f-870dd9e10349@gmail.com> <54c9afed-129b-914c-32f4-451dbdf41279@switch.ch> <053ab4a2-4159-ce84-5f1d-f88d1445f063@gmail.com> Message-ID: <93c25304-e992-1f4b-ece8-2ef33fd00132@gmail.com> On 2/1/2018 2:56 AM, Saverio Proto wrote: > Hello ! > > thanks for accepting the patch :) > > It looks like the best is always to send an email and have a short > discussion together, when we are not sure about a patch. 
> > thank you > > Cheers, > > Saverio > There is also the #openstack-stable IRC channel if you want to get a faster response without having to go to the mailing list. Feel free to ping me there anytime about stable patch questions. -- Thanks, Matt From derekh at redhat.com Thu Feb 1 16:05:34 2018 From: derekh at redhat.com (Derek Higgins) Date: Thu, 1 Feb 2018 16:05:34 +0000 Subject: [openstack-dev] [tripleo]Testing ironic in the overcloud In-Reply-To: References: Message-ID: On 1 February 2018 at 15:36, Emilien Macchi wrote: > > > On Thu, Feb 1, 2018 at 6:35 AM, Derek Higgins wrote: > >> Hi All, >> I've been working on a set of patches as a WIP to test ironic in the >> overcloud[1], the approach I've started with is to add ironic into the >> overcloud controller in scenario004. Also to run a script on the controller >> (as a NodeExtraConfigPost) that sets up a VM with vbmc that can then be >> controlled by ironic. The WIP currently replaces the current tempest tests >> with some commands to sanity test the setup. This essentially works but >> things need to be cleaned up a bit so I've a few questions >> >> o Is scenario004 the correct choice? >> > > Because we might increase the timeout risk on scenario004, I would > recommend to create a new dedicated scenario that would deploy a very basic > overcloud with just ironic + dependencies (keystone, glance, neutron, and > nova?) > Ok, I can do this > > >> >> o Should I create a new tempest test for baremetal as some of the >> networking stuff is different? >> > > I think we would need to run baremetal tests for this new featureset, see > existing files for examples. > Do you mean that we should use existing tests somewhere or create new ones? > > >> >> o Is running a script on the controller with NodeExtraConfigPost the best >> way to set this up or should I be doing something with quickstart? I don't >> think quickstart currently runs things on the controler does it? >> > > What kind of thing do you want to run exactly? > The contents to this file will give you an idea, somewhere I need to setup a node that ironic will control with ipmi https://review.openstack.org/#/c/485261/19/ci/common/vbmc_setup.yaml > I'll let the CI squad replies as well but I think we need a new scenario, > that we would only run when touching ironic files in tripleo. Using > scenario004 really increase the risk of timeout and we don't want it. > Ok > > Thanks for this work! > -- > Emilien Macchi > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rico.lin.guanyu at gmail.com Thu Feb 1 16:05:53 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Fri, 2 Feb 2018 00:05:53 +0800 Subject: [openstack-dev] [keystone][nova][ironic][heat] Do we want a BM/VM room at the PTG? In-Reply-To: References: <40e91c82-e6c4-65bd-f9b0-3b827c7629e6@gmail.com> <0c0807a5-86e7-91cd-39cc-fc0129052d9c@redhat.com> Message-ID: > > Fair point. When the "VM/baremetal workgroup" was originally formed, > the goal was more about building clouds with both types of resources, > making them behave similarly from a user perspective, etc. Somehow > we got into talking applications and these other topics came up, which > seemed more interesting/pressing to fix. 
:) > > Maybe "cross-project identity integration" or something is a better name? Cloud-Native Application IMO is one of the ways to see the flow for both VM/Baremetal. But It's true if we can have more specific goal coss project to make sure we're marching to that goal (which `VM/baremetal workgroup` formed for) will be even better. Instead of modifying the name, I do prefer if we can spend some time to trace current flow and come out with specific targets for teams to work on in rocky to allow building both types of resources and feel like same flow to user, and which of cause includes what keystone already started. So other than topics Collen mentioned above (and I think they all great), we should focus working on what topics we can comes out from here (I think that's why Collen start this ML). Ideas? -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Thu Feb 1 16:18:54 2018 From: emilien at redhat.com (Emilien Macchi) Date: Thu, 1 Feb 2018 08:18:54 -0800 Subject: [openstack-dev] [tripleo]Testing ironic in the overcloud In-Reply-To: References: Message-ID: On Thu, Feb 1, 2018 at 8:05 AM, Derek Higgins wrote: [...] > o Should I create a new tempest test for baremetal as some of the >>> networking stuff is different? >>> >> >> I think we would need to run baremetal tests for this new featureset, see >> existing files for examples. >> > Do you mean that we should use existing tests somewhere or create new > ones? > I mean we should use existing tempest tests from ironic, etc. Maybe just a baremetal scenario that spawn a baremetal server and test ssh into it, like we already have with other jobs. o Is running a script on the controller with NodeExtraConfigPost the best >>> way to set this up or should I be doing something with quickstart? I don't >>> think quickstart currently runs things on the controler does it? >>> >> >> What kind of thing do you want to run exactly? >> > The contents to this file will give you an idea, somewhere I need to setup > a node that ironic will control with ipmi > https://review.openstack.org/#/c/485261/19/ci/common/vbmc_setup.yaml > extraconfig works for me in that case, I guess. Since we don't productize this code and it's for CI only, it can live here imho. Thanks, -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From msm at redhat.com Thu Feb 1 17:04:45 2018 From: msm at redhat.com (michael mccune) Date: Thu, 1 Feb 2018 12:04:45 -0500 Subject: [openstack-dev] [all][api] POST /api-sig/news Message-ID: <0ef8e077-57ea-0d86-d143-d5f5563e587c@redhat.com> Greetings OpenStack community, Today's meeting was primarily focused on a request for guidance related to action endpoints and on planning topics for the upcoming PTG. Tommy Hu has sent an email to the developer list[7] describing how several types of actions are currently being handled through the cinder and nova REST interfaces. In specific this is related to how APIs are registered with a gateway service. The current methodology within cinder and nova has been to use generic action endpoints, allowing the body of the request to further define the action. These overloaded endpoints cause difficulty when using an API gateway. The SIG has taken up discussion about how this could be improved and what guidance can be created for the community. 
Although no firm plan has been derived yet, the SIG will join the conversation on the mailing list and also discuss the wider topic of actions at the PTG.

On the topic of the PTG, the SIG has created an etherpad[8] where agenda items are starting to be proposed. If you have any topic that you would like to discuss, or see discussed, please add it to that etherpad.

As always if you're interested in helping out, in addition to coming to the meetings, there's also:

* The list of bugs [5] indicates several missing or incomplete guidelines.
* The existing guidelines [2] always need refreshing to account for changes over time. If you find something that's not quite right, submit a patch [6] to fix it.
* Have you done something for which you think guidance would have made things easier but couldn't find any? Submit a patch and help others [6].

# Newly Published Guidelines

None this week.

# API Guidelines Proposed for Freeze

Guidelines that are ready for wider review by the whole community.

None this week.

# Guidelines Currently Under Review [3]

* Add guideline on exposing microversions in SDKs
  https://review.openstack.org/#/c/532814/

* A (shrinking) suite of several documents about doing version and service discovery
  Start at https://review.openstack.org/#/c/459405/

* WIP: microversion architecture archival doc (very early; not yet ready for review)
  https://review.openstack.org/444892

* WIP: Add API-schema guide (still being defined)
  https://review.openstack.org/#/c/524467/

# Highlighting your API impacting issues

If you seek further review and insight from the API SIG about APIs that you are developing or changing, please address your concerns in an email to the OpenStack developer mailing list[1] with the tag "[api]" in the subject. In your email, you should include any relevant reviews, links, and comments to help guide the discussion of the specific challenge you are facing.

To learn more about the API SIG mission and the work we do, see our wiki page [4] and guidelines [2].

Thanks for reading and see you next week!

# References

[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[2] http://specs.openstack.org/openstack/api-wg/
[3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z
[4] https://wiki.openstack.org/wiki/API_SIG
[5] https://bugs.launchpad.net/openstack-api-wg
[6] https://git.openstack.org/cgit/openstack/api-wg
[7] http://lists.openstack.org/pipermail/openstack-dev/2018-January/126334.html
[8] https://etherpad.openstack.org/p/api-sig-ptg-rocky

Meeting Agenda
https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda
Past Meeting Records
http://eavesdrop.openstack.org/meetings/api_sig/
Open Bugs
https://bugs.launchpad.net/openstack-api-wg

From pkovar at redhat.com  Thu Feb  1 17:07:33 2018
From: pkovar at redhat.com (Petr Kovar)
Date: Thu, 1 Feb 2018 18:07:33 +0100
Subject: [openstack-dev] [docs] Core review stats for February
Message-ID: <20180201180733.2764810106a165b559d6b93e@redhat.com>

Hi all,

This is more of an FYI for people interested in all things docs that the docs core team agreed on opening up the process for new docs core nominations or removals.
Instead of using a private list, this will now be discussed in public, using the openstack-dev list, as documented here: https://docs.openstack.org/doc-contrib-guide/docs-review.html#achieving-core-reviewer-status The docs core team is the core for openstack-manuals, openstackdocstheme, and openstack-doc-tools, and, as a group member, also for subteam repos organized under the Docs project, such as contributor-guide or security-doc. For February, I don't recommend any changes to the core team, which is now pretty stable. If you have any suggestions, please let us know, preferably, in this thread. Thanks, pk From doug at doughellmann.com Thu Feb 1 17:16:30 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 1 Feb 2018 12:16:30 -0500 Subject: [openstack-dev] [Release-job-failures] [i18n][senlin][release] Tag of openstack/python-senlinclient failed In-Reply-To: References: Message-ID: <0C150EA1-FFB7-48D9-9631-84DE0E54E24E@doughellmann.com> Excerpts from zuul's message of 2018-02-01 17:03:06 +0000: > Build failed. > > - publish-openstack-releasenotes http://logs.openstack.org/f8/f84d8220a3df4421c1cfa7ee7b1e551b57c3505d/tag/publish-openstack-releasenotes/49c0e16/ : POST_FAILURE in 5m 48s > This failure to build the senlin client release notes appears to have something to do with the internationalization setup. It is looking for a CSS file under the fr translation, for some reason. Perhaps this is related to the race condition we know that the publish jobs have? Doug rsync: failed to set permissions on "/afs/.openstack.org/docs/releasenotes/python-senlinclient/fr/_static/css/.bootstrap.css.nwixts": No such file or directory (2) rsync: rename "/afs/.openstack.org/docs/releasenotes/python-senlinclient/fr/_static/css/.bootstrap.css.nwixts" -> "fr/_static/css/bootstrap.css": No such file or directory (2) rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1183) [sender=3.1.1] Traceback (most recent call last): File "/tmp/ansible_bq6ijf9y/ansible_module_zuul_afs.py", line 115, in main() File "/tmp/ansible_bq6ijf9y/ansible_module_zuul_afs.py", line 110, in main output = afs_sync(p['source'], p['target']) File "/tmp/ansible_bq6ijf9y/ansible_module_zuul_afs.py", line 95, in afs_sync output['output'] = subprocess.check_output(shell_cmd, shell=True) File "/usr/lib/python3.5/subprocess.py", line 626, in check_output **kwargs).stdout File "/usr/lib/python3.5/subprocess.py", line 708, in run output=stdout, stderr=stderr) subprocess.CalledProcessError: Command '/bin/bash -c "mkdir -p /afs/.openstack.org/docs/releasenotes/python-senlinclient/ && /usr/bin/rsync -rtp --safe-links --delete-after --out-format='<>%i %n%L' --filter='merge /tmp/tmp9i7el2ow' /var/lib/zuul/builds/49c0e164949c43b68c05856f6cc6452e/work/artifacts/ /afs/.openstack.org/docs/releasenotes/python-senlinclient/"' returned non-zero exit status 23 _______________________________________________ Release-job-failures mailing list Release-job-failures at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures From johnsomor at gmail.com Thu Feb 1 18:18:31 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Thu, 1 Feb 2018 10:18:31 -0800 Subject: [openstack-dev] [neutron-lbaas][octavia]Octavia request poll interval not respected In-Reply-To: <10892_1517493221_5A731BE5_10892_333_15_849F1D1DBD4A00479343403412AE4F8201AB0BE3B3@ESSEN.office.orange.intra> References: 
<10892_1517493221_5A731BE5_10892_333_15_849F1D1DBD4A00479343403412AE4F8201AB0BE3B3@ESSEN.office.orange.intra> Message-ID: Hi Mihaela, The polling logic that the neutron-lbaas octavia driver uses to update the neutron database is as follows: Once a Create/Update/Delete action is executed against a load balancer using the Octavia driver, a polling thread is created. On every request_poll_interval the thread queries the Octavia v1 API to check the status of the object modified. It will save the updated state in the neutron database and exit if the object's provisioning status becomes one of: "ACTIVE", "DELETED", or "ERROR". It will repeat this polling until one of those provisioning statuses is met, or the request_poll_timeout is exceeded. My suspicion is that the GET requests you are seeing for those objects are coming from another source. You can test this by running neutron-lbaas in debug mode. It will then log a debug message for every polling interval. The code for this thread is located here: https://github.com/openstack/neutron-lbaas/blob/stable/ocata/neutron_lbaas/drivers/octavia/driver.py#L66 Michael From waboring at hemna.com Thu Feb 1 18:26:12 2018 From: waboring at hemna.com (Walter Boring) Date: Thu, 1 Feb 2018 13:26:12 -0500 Subject: [openstack-dev] [nova][cinder] Questions about truncked disk serial number In-Reply-To: <1517409417.32220.10.camel@cloudbasesolutions.com> References: <1517409417.32220.10.camel@cloudbasesolutions.com> Message-ID: Yuk. So looking back in the nova libvirt volume code, it looks like nova is ignoring almost all of the device_info coming back from os-brick's connect_volume() call. os-brick has the scsi id here: https://github.com/openstack/os-brick/blob/master/os_brick/initiator/connectors/iscsi.py#L552 My guess, though, is that nova uses the cinder UUID in the domain xml so it can use it later at some point? Walt On Wed, Jan 31, 2018 at 9:36 AM, Lucian Petrut < lpetrut at cloudbasesolutions.com> wrote: > Actually, when using the libvirt driver, the SCSI id returned by os-brick > is not exposed to the guest. The reason is that Nova explicitly sets the > volume id as "serial" id in the guest disk configuration. Qemu will expose > this to the guest, but with a 20 character limit. > > For what it's worth, Kubernetes as well as some guides rely on this > behaviour. > > For example: > > nova volume-attach e03303e1-c20b-441c-b94a-724cb2469487 > 10780b60-ad70-479f-a612-14d03b1cc64d > virsh dumpxml `nova show cirros | grep instance_name | cut -d "|" -f 3` > > > instance-0000013d > e03303e1-c20b-441c-b94a-724cb2469487 > .... > > > > > > * 10780b60-ad70-479f-a612-14d03b1cc64d* > >
function='0x0'/> > > > nova log: > Jan 31 15:39:54 ubuntu nova-compute[46142]: DEBUG > os_brick.initiator.connectors.iscsi [None req-d0c62440-133c-4e89-8798-20278ca50f00 > admin admin] <== connect_volume: return (2578ms) {'path': u'/dev/sdb', > 'scsi_wwn': u'360000000000000000e00000000010001', 'type': u'block'} > {{(pid=46142) trace_logging_wrapper /usr/local/lib/python2.7/dist- > packages/os_brick/utils.py:170}} > Jan 31 15:39:54 ubuntu nova-compute[46142]: DEBUG > nova.virt.libvirt.volume.iscsi [None req-d0c62440-133c-4e89-8798-20278ca50f00 > admin admin] Attached iSCSI volume {'path': u'/dev/sdb', 'scsi_wwn': ' > 360000000000000000e00000000010001', 'type': 'block'} {{(pid=46142) > connect_volume /opt/stack/nova/nova/virt/libvirt/volume/iscsi.py:65}} > Jan 31 15:39:54 ubuntu nova-compute[46142]: DEBUG nova.virt.libvirt.guest > [None req-d0c62440-133c-4e89-8798-20278ca50f00 admin admin] attach device > xml: > Jan 31 15:39:54 ubuntu nova-compute[46142]: type="raw" cache="none" io="native"/> > Jan 31 15:39:54 ubuntu nova-compute[46142]: > Jan 31 15:39:54 ubuntu nova-compute[46142]: dev="vdb"/> > Jan 31 15:39:54 ubuntu nova-compute[46142]: 10780b60-ad70-479f- > a612-14d03b1cc64d > Jan 31 15:39:54 ubuntu nova-compute[46142]: > Jan 31 15:39:54 ubuntu nova-compute[46142]: {{(pid=46142) attach_device > /opt/stack/nova/nova/virt/libvirt/guest.py:302}} > > Regards, > Lucian Petrut > > On Wed, 2018-01-31 at 07:59 -0500, Walter Boring wrote: > > First off, the id's you are showing there are Cinder uuid's to identify > the volumes in the cinder DB and are used for cinder based actions. The > Ids that are seen and used by the system for discovery and passing to qemu > are the disk SCSI ids, which are embedded in the volume's themselves. > os-brick returns the SCSI id to nova for use in attaching and it's not > limited to the 20 characters. > > > > On Tue, Jan 16, 2018 at 4:19 AM, Yikun Jiang wrote: > > Some detail steps as below: > 1. First, We have 2 volumes with same part-uuid prefix. > [image: 内嵌图片 1] > > volume(yikun2) is attached to server(test) > > 2. In GuestOS(Cent OS 7), take a look at by path and by id: > [image: 内嵌图片 2] > we found both by-path and by-id vdb links was generated successfully. > > 3. attach volume(yikun2_1) to server(test) > [image: 内嵌图片 4] > > 4. In GuestOS(Cent OS 7), take a look at by path and by id: > > [image: 内嵌图片 6] > > by-path soft link was generated successfully, but by-id link was failed > to generate. > *That is, in this case, if a user find the device by by-id, it would be > failed to find it or find a wrong device.* > > one of the user cases was happened on k8s device finding, more info you > can see the ref as below: > https://github.com/kubernetes/kubernetes/blob/53a8ac753bf468 > eaf6bcb5a07e34a0a67480df43/pkg/cloudprovider/providers/ > openstack/openstack_volumes.go#L463 > > So, I think by-id is NOT a good way to find the device, but what the best > practice is? let's see other idea. 
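A minimal sketch, added for illustration only and not part of the original thread, of the truncation being described: libvirt passes the cinder volume UUID as the disk serial, the guest's virtio_blk driver keeps at most 20 characters of it (the limit referenced below in the kernel links), and udev builds the /dev/disk/by-id link as "virtio-" plus that truncated serial, so two UUIDs sharing their first 20 characters collide on the same link. The first UUID is the one from the example above; the second is made up to force the collision.

    # Illustrative only: how the guest's by-id link name is derived from the
    # disk serial (the cinder volume UUID) under the 20-character limit.
    VIRTIO_BLK_ID_BYTES = 20  # truncation limit discussed in this thread

    def by_id_link(serial: str) -> str:
        return "/dev/disk/by-id/virtio-" + serial[:VIRTIO_BLK_ID_BYTES]

    real_vol = "10780b60-ad70-479f-a612-14d03b1cc64d"  # volume from the example above
    fake_vol = "10780b60-ad70-479f-aaaa-ffffffffffff"  # hypothetical, same first 20 chars

    print(by_id_link(real_vol))                          # /dev/disk/by-id/virtio-10780b60-ad70-479f-a
    assert by_id_link(real_vol) == by_id_link(fake_vol)  # the later attach overwrites the earlier link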
> > Regards, > Yikun > > ---------------------------------------- > Jiang Yikun(Kero) > Mail: yikunkero at gmail.com > > 2018-01-16 14:36 GMT+08:00 Zhenyu Zheng : > > Ops, forgot references: > [1] https://github.com/torvalds/linux/blob/1cc15701cd89b0ce6 > 95bbc5cff3a2bf3e2efd25f/include/uapi/linux/virtio_blk.h#L54 > [2] https://github.com/torvalds/linux/blob/1cc15701cd89b0ce6 > 95bbc5cff3a2bf3e2efd25f/drivers/block/virtio_blk.c#L363 > > On Tue, Jan 16, 2018 at 2:35 PM, Zhenyu Zheng > wrote: > > Hi, > > I meet a problem like this recently: > > When attaching a volume to an instance, in the xml, the disk is described > as: > > [image: Inline image 1] > where the serial number here is the volume uuid in Cinder. While inside > the vm: > in /dev/disk/by-id, there is a link for /vdb with the name of > "virtio"+truncated serial number: > > [image: Inline image 2] > > and according to https://access.redhat.com/documentation/en-US/Red_Hat_Ent > erprise_Linux_OpenStack_Platform/2/html/Getting_Started_Guide/ch16s03.html > > it seems that we will use this mount the volume. > > The truncate seems to be happen in here [1][2] which is 20 digits. > > *My question here is: *if two volume have the identical first 20 digits > in their uuids, it seems that the latter attached one will overwrite the > first one's link: > [image: Inline image 3] > (the above graph is snapshot for an volume backed instance, the > virtio-15exxxxx was point to vda before, the by-path seems correct though) > > It is rare to have the identical first 20 digits of two uuids, but > possible, so what was the consideration of truncate only 20 digits of the > volume uuid instead of use full 32? > > BR, > > Kevin Zheng > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 19561 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 46374 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 9638 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image.png Type: image/png Size: 13550 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 10798 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 5228 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 21095 bytes Desc: not available URL: From zbitter at redhat.com Thu Feb 1 18:43:36 2018 From: zbitter at redhat.com (Zane Bitter) Date: Thu, 1 Feb 2018 13:43:36 -0500 Subject: [openstack-dev] [infra][all] New Zuul Depends-On syntax In-Reply-To: <87zi51v5uu.fsf@meyer.lemoncheese.net> References: <87efmfz05l.fsf@meyer.lemoncheese.net> <005844f0-ea22-3e0b-fd2e-c1c889d9293b@inaugust.com> <2b04ea61-2b2c-d740-37e7-47215b10e3a0@nemebean.com> <87zi51v5uu.fsf@meyer.lemoncheese.net> Message-ID: <7bea8147-4d21-bbb3-7a28-a179a4a132af@redhat.com> On 25/01/18 19:08, James E. Blair wrote: > Mathieu Gagné writes: > >> On Thu, Jan 25, 2018 at 3:55 PM, Ben Nemec wrote: >>> >>> >>> I'm curious what this means as far as best practices for inter-patch >>> references. In the past my understanding was the the change id was >>> preferred, both because if gerrit changed its URL format the change id links >>> would be updated appropriately, and also because change ids can be looked up >>> offline in git commit messages. Would that still be the case for everything >>> except depends-on now? > > Yes, that's a down-side of URLs. I personally think it's fine to keep > using change-ids for anything other than Depends-On, though in many of > those cases the commit sha may work as well. > >> That's my concern too. Also AFAIK, Change-Id is branch agnostic. This >> means you can more easily cherry-pick between branches without having >> to change the URL to match the new branch for your dependencies. > > Yes, there is a positive and negative aspect to this issue. > > On the one hand, for those times where it was convenient to say "depend > on this change in all its forms across all branches of all projects", > one must now add a URL for each. > > On the other hand, with URLs, it is now possible to indicate that a > change specifically depends on another change targeted to one branch, or > targeted to several branches. Simply list each URL (or don't) as > appropriate. That wasn't possible before -- it wall all or none. Yeah, it's definitely nice to have that flexibility. e.g. here is a patch that wouldn't merge for 3 months because the thing it was dependent on also got proposed as a backport: https://review.openstack.org/#/c/514761/1 From an OpenStack perspective, it would be nice if a Gerrit ID implied a change from the same Gerrit instance as the current repo and the same branch as the current patch if it exists (otherwise any branch), and we could optionally use a URL instead to select a particular change. It's not obvious to me that that'd be the wrong thing for a tool that works across multiple Gerrit instances and/or other backends either, but I'm sure y'all have thought about it in more depth than I have. cheers, Zane. From corvus at inaugust.com Thu Feb 1 18:55:33 2018 From: corvus at inaugust.com (James E. 
Blair) Date: Thu, 01 Feb 2018 10:55:33 -0800 Subject: [openstack-dev] [infra][all] New Zuul Depends-On syntax In-Reply-To: <7bea8147-4d21-bbb3-7a28-a179a4a132af@redhat.com> (Zane Bitter's message of "Thu, 1 Feb 2018 13:43:36 -0500") References: <87efmfz05l.fsf@meyer.lemoncheese.net> <005844f0-ea22-3e0b-fd2e-c1c889d9293b@inaugust.com> <2b04ea61-2b2c-d740-37e7-47215b10e3a0@nemebean.com> <87zi51v5uu.fsf@meyer.lemoncheese.net> <7bea8147-4d21-bbb3-7a28-a179a4a132af@redhat.com> Message-ID: <871si4czfe.fsf@meyer.lemoncheese.net> Zane Bitter writes: > Yeah, it's definitely nice to have that flexibility. e.g. here is a > patch that wouldn't merge for 3 months because the thing it was > dependent on also got proposed as a backport: > > https://review.openstack.org/#/c/514761/1 > > From an OpenStack perspective, it would be nice if a Gerrit ID implied > a change from the same Gerrit instance as the current repo and the > same branch as the current patch if it exists (otherwise any branch), > and we could optionally use a URL instead to select a particular > change. Yeah, that's reasonable, and it is similar to things Zuul does in other areas, but I think one of the thing we want to do with Depends-On is consider that Zuul isn't the only audience. It's there just as much for the reviewers, and other folks. So when it comes to Gerrit change ids, I feel we had to constrain it to Gerrit's own behavior. When you click on one of those in Gerrit, it shows you all of the changes across all of the repos and branches with that change-id. So that result list is what Zuul should work with. Otherwise there's a discontinuity between what a user sees when they click the hyperlink under the change-id and what Zuul does. Similarly, in the new system, you click the URL and you see what Zuul is going to use. And that leads into the reason we want to drop the old syntax: to make it seamless for a GitHub user to know how to Depends-On a Gerrit change, and vice versa, with neither requiring domain-specific knowledge about the system. -Jim From pkovar at redhat.com Thu Feb 1 19:09:22 2018 From: pkovar at redhat.com (Petr Kovar) Date: Thu, 1 Feb 2018 20:09:22 +0100 Subject: [openstack-dev] [docs][ptl] PTL candidacy for Docs Message-ID: <20180201200922.845628d4bd913cf623ed861c@redhat.com> Hi all, I'd like to announce my candidacy for PTL of the Docs project for Rocky. I've been the Docs PTL since Queens and besides my work on OpenStack docs, I also contribute to the RDO Project. During the Queens cycle, we mostly finalized our work on project docs migration, we also continued assisting project teams with their setup for project-specific content, we improved our template system for docs.openstack.org, stopped unpublishing EOL content, and more. We now also have a docs mission statement to help us identify project goals within a broader OpenStack context. For Rocky, we need to review and revisit the team goals and continue working on areas like docs theme and build automation, alongside the content restructure and rework of what is left in openstack-manuals. Our Rocky PTG planning is well underway but I think it is now more important than ever that we keep the project as open as possible to all potential documentation contributors, regardless of whether they attend in-person events or not, this also includes drive-by contributions. 
Thank you, pk From sean.mcginnis at gmx.com Thu Feb 1 19:35:05 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 1 Feb 2018 13:35:05 -0600 Subject: [openstack-dev] [telemetry][heat][mistral][sdk][searchlight][senlin][tacker][tricircle][tripleo] Missing Queens releases In-Reply-To: <20180131210344.GA32139@sm-xps> References: <20180131210344.GA32139@sm-xps> Message-ID: <20180201193504.GA26084@sm-xps> Just confirming and closing things out. We did not receive any negative responses to the plan below, so a little earlier today I approved the mentioned patch and we cut releases and branches for all libs. The next step if for these new versions to pass CI and get FFEs to raise the upper constraints for them past our requirements freeze. That official request will be coming shortly. Sean On Wed, Jan 31, 2018 at 03:03:44PM -0600, Sean McGinnis wrote: > While reviewing Queens release deliverables and preparing missing stable/queens > branches, we have identified several libraries that have not had any Queens > releases. > > In the past, we have stated we would force a release for any missing > deliverables in order to have a clear branching point. We considered tagging > the base of the stable/pike branch again and starting a new stable/queens > branch from there, but that doesn't work for several technical reasons the most > important of which is that the queens release would not include any changes > that had been backported to stable/pike, and we have quite a few of those. So, > we are left with 2 choices: do not release these libraries at all for queens, > or release from HEAD on master. Skipping the releases entirely will make it > difficult to provide bug fixes in these libraries over the life of the queens > release so, although it is potentially disruptive, we plan to release from HEAD > on master. We will rely on the constraints update mechanism to protect the gate > if the new releases introduce bugs and teams will be able to fix those problems > on the new stable/queens branch and then release a new version. > > See https://review.openstack.org/#/c/539657/ and the notes below for details of > what will be tagged. > > ceilometermiddleware > -------------------- > > Mostly doc and CI related changes, but the "Retrieve project id to ignore from > keystone" commit (e2bf485) looks like it may be important. > > Heat > ---- > > heat-translator > There are quite a few bug fixes and feature changes merged that have not been > released. It is currently marked with a type of "library", but we will change > this to "other" and require a release by the end of the cycle (see > https://review.openstack.org/#/c/539655/ for that change). Based on the README > description, this appears to be a command line and therefore should maybe have > a type of "client-library", but "other" would work as far as release process > goes. Since this is kind of a special command line, perhaps "other" would be > the correct type going forward, but we will need input from the Heat team on > that. > > python-heatclient > Only reno updates, so a new release on master should not be very disruptive. > > tosca-parser > Several unreleased bug fixes and feature changes. Consumed by heat-translator > and tacker, so there is some risk in releasing it this late. > > > Mistral > ------- > > mistral-lib > Mostly packaging and build changes, with a couple of fixes. It is used by > mistral and tripleo-common. > > SDK > --- > > requestsexceptions > No changes this cycle. 
We will branch stable/queens from the same point as > stable/pike. > > Searchlight > ----------- > > python-searchlightclient > Only doc and g-r changes. Since the risk here is low, we are going to release > from master and branch from there. > > Senlin > ------ > > python-senlinclient > Just one bug fix. This is a dependency for heat, mistral, openstackclient, > python-openstackclient, rally, and senlin-dashboard. The one bug fix looks > fairly safe though, so we are going to release from master and branch from > there. > > Tacker > ------ > > python-tackerclient > Many feature changes and bug fixes. This impacts mistral and tacker. > > Tricircle > --------- > > python-tricircleclient > One feature and several g-r changes. > > > Please respond here, comment on the patch, or hit us up in #openstack-release > if you have any questions or concerns. > > Thanks, > Sean McGinnis (smcginnis) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From sean.mcginnis at gmx.com Thu Feb 1 19:44:19 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 1 Feb 2018 13:44:19 -0600 Subject: [openstack-dev] [requirements] FFE for delayed libraries Message-ID: <20180201194419.GA26817@sm-xps> Due to gate issues and other delays, there's quite a handful of libs that were not released in time for the requirements freeze. We now believe we've gotten all libraries processed for the final Queens releases. In order to reduce the load, we have batched all upper constraints bumps for these libs into one patch: https://review.openstack.org/#/c/540105/ This is my official FFE request to have these updates accepted for Queens past the requirements freeze. If anyone is aware of any issues with these, please bring that to our attention as soon as possible.
Thanks, Sean Affected Updates ---------------- update constraint for python-saharaclient to new release 1.5.0 update constraint for instack-undercloud to new release 8.2.0 update constraint for paunch to new release 2.2.0 update constraint for python-mistralclient to new release 3.2.0 update constraint for python-senlinclient to new release 1.7.0 update constraint for pycadf to new release 2.7.0 update constraint for os-refresh-config to new release 8.2.0 update constraint for tripleo-common to new release 8.4.0 update constraint for reno to new release 2.7.0 update constraint for os-net-config to new release 8.2.0 update constraint for os-apply-config to new release 8.2.0 update constraint for os-client-config to new release 1.29.0 update constraint for ldappool to new release 2.2.0 update constraint for aodhclient to new release 1.0.0 update constraint for python-searchlightclient to new release 1.3.0 update constraint for mistral-lib to new release 0.4.0 update constraint for os-collect-config to new release 8.2.0 update constraint for ceilometermiddleware to new release 1.2.0 update constraint for tricircleclient to new release 0.3.0 update constraint for requestsexceptions to new release 1.4.0 update constraint for python-magnumclient to new release 2.8.0 update constraint for tosca-parser to new release 0.9.0 update constraint for python-tackerclient to new release 0.11.0 update constraint for python-heatclient to new release 1.14.0 From prometheanfire at gentoo.org Thu Feb 1 19:47:30 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Thu, 1 Feb 2018 13:47:30 -0600 Subject: [openstack-dev] [requirements] FFE for delayed libraries In-Reply-To: <20180201194419.GA26817@sm-xps> References: <20180201194419.GA26817@sm-xps> Message-ID: <20180201194730.od4b666jjcagpmo4@gentoo.org> On 18-02-01 13:44:19, Sean McGinnis wrote: > Due to gate issues and other delays, there's quite a handful of libs that were > not released in time for the requirements freeze. > > We now believe we've gotten all libraries processed for the final Queens > releases. In order to reduce the load, we have batches all upper constraints > bumps for these libs into one patch: > > https://review.openstack.org/#/c/540105/ > > This is my official FFE request to have these updates accepted yet for Queens > past the requirements freeze. > > If anyone is aware of any issues with these, please bring that to our attention > as soon as possible. 
> > Thanks, > Sean > > > Affected Updates > ---------------- > > update constraint for python-saharaclient to new release 1.5.0 > update constraint for instack-undercloud to new release 8.2.0 > update constraint for paunch to new release 2.2.0 > update constraint for python-mistralclient to new release 3.2.0 > update constraint for python-senlinclient to new release 1.7.0 > update constraint for pycadf to new release 2.7.0 > update constraint for os-refresh-config to new release 8.2.0 > update constraint for tripleo-common to new release 8.4.0 > update constraint for reno to new release 2.7.0 > update constraint for os-net-config to new release 8.2.0 > update constraint for os-apply-config to new release 8.2.0 > update constraint for os-client-config to new release 1.29.0 > update constraint for ldappool to new release 2.2.0 > update constraint for aodhclient to new release 1.0.0 > update constraint for python-searchlightclient to new release 1.3.0 > update constraint for mistral-lib to new release 0.4.0 > update constraint for os-collect-config to new release 8.2.0 > update constraint for ceilometermiddleware to new release 1.2.0 > update constraint for tricircleclient to new release 0.3.0 > update constraint for requestsexceptions to new release 1.4.0 > update constraint for python-magnumclient to new release 2.8.0 > update constraint for tosca-parser to new release 0.9.0 > update constraint for python-tackerclient to new release 0.11.0 > update constraint for python-heatclient to new release 1.14.0 > officially accepted, thanks for keeping me updated while this was going on. -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From ed at leafe.com Thu Feb 1 20:54:48 2018 From: ed at leafe.com (Ed Leafe) Date: Thu, 1 Feb 2018 14:54:48 -0600 Subject: [openstack-dev] [api-wg] [api] [cinder] [nova] Support specify action name in request url In-Reply-To: References: Message-ID: On Jan 18, 2018, at 4:07 AM, TommyLike Hu wrote: > Recently We found an issue related to our OpenStack action APIs. We usually expose our OpenStack APIs by registering them to our API Gateway (for instance Kong [1]), but it becomes very difficult when regarding to action APIs. We can not register and control them seperately because them all share the same request url which will be used as the identity in the gateway service, not say rate limiting and other advanced gateway features, take a look at the basic resources in OpenStack We discussed your email at today’s API-SIG meeting [0]. This is an area that is always contentious in the RESTful world. Actions, tasks, and state changes are not actual resources, and in a pure REST design they should never be part of the URL. Instead, you should POST to the actual resource, with the desired action in the body. So in your example: > URL:/volumes/{volume_id}/action > BODY:{'extend':{}} the preferred way of achieving this is: URL: POST /volumes/{volume_id} BODY: {‘action’: ‘extend’, ‘params’: {}} The handler for the POST action should inspect the body, and call the appropriate method. Having said that, we realize that a lot of OpenStack services have adopted the more RPC-like approach that you’ve outlined. So while we strongly recommend a standard RESTful approach, if you have already released an RPC-like API, our advice is: a) avoid having every possible verb in the URL. 
In other words, don’t use: /volumes/{volume_id}/mount /volumes/{volume_id}/umount /volumes/{volume_id}/extend This moves you further into RPC-land, and will make updating your API to a more RESTful design more difficult. b) choose a standard term for the item in the URL. In other words, always use ‘action’ or ‘task’ or whatever else you have adopted. Don’t mix terminology. Then pass the action to perform, along with any parameters in the body. This will make it easier to transition to a RESTful design by later updating the handlers to first inspect the BODY instead of relying upon the URL to determine what action to perform. You might also want to contact the Kong developers to see if there is a way to work with a RESTful API design. -- Ed Leafe [0] http://eavesdrop.openstack.org/meetings/api_sig/2018/api_sig.2018-02-01-16.02.log.html#l-28 From thierry at openstack.org Thu Feb 1 21:02:41 2018 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 1 Feb 2018 22:02:41 +0100 Subject: [openstack-dev] [ptg] Dublin PTG schedule up Message-ID: Hi everyone, The schedule for the Dublin PTG is now posted on the PTG website: https://www.openstack.org/ptg#tab_schedule I'll post on this thread if anything changes, but it's pretty unlikely at this point. Note that we have a lot of available rooms on Monday/Tuesday to discuss additional topics. If you think of something we should really take half a day to discuss, please add it to the following etherpad: https://etherpad.openstack.org/p/PTG-Dublin-missing-topics If there is consensus it's a good topic and we agree on a time where to fit it, we could add it to the schedule. For smalled things (like 90-min discussions) we can book time dynamically during the event thanks to the new PTGbot features. See you there ! -- Thierry Carrez (ttx) From mordred at inaugust.com Thu Feb 1 21:10:40 2018 From: mordred at inaugust.com (Monty Taylor) Date: Thu, 1 Feb 2018 15:10:40 -0600 Subject: [openstack-dev] [requirements] FFE for delayed libraries In-Reply-To: <20180201194730.od4b666jjcagpmo4@gentoo.org> References: <20180201194419.GA26817@sm-xps> <20180201194730.od4b666jjcagpmo4@gentoo.org> Message-ID: On 02/01/2018 01:47 PM, Matthew Thode wrote: > On 18-02-01 13:44:19, Sean McGinnis wrote: >> Due to gate issues and other delays, there's quite a handful of libs that were >> not released in time for the requirements freeze. >> >> We now believe we've gotten all libraries processed for the final Queens >> releases. In order to reduce the load, we have batches all upper constraints >> bumps for these libs into one patch: >> >> https://review.openstack.org/#/c/540105/ >> >> This is my official FFE request to have these updates accepted yet for Queens >> past the requirements freeze. >> >> If anyone is aware of any issues with these, please bring that to our attention >> as soon as possible. 
>> >> Thanks, >> Sean >> >> >> Affected Updates >> ---------------- >> >> update constraint for python-saharaclient to new release 1.5.0 >> update constraint for instack-undercloud to new release 8.2.0 >> update constraint for paunch to new release 2.2.0 >> update constraint for python-mistralclient to new release 3.2.0 >> update constraint for python-senlinclient to new release 1.7.0 >> update constraint for pycadf to new release 2.7.0 >> update constraint for os-refresh-config to new release 8.2.0 >> update constraint for tripleo-common to new release 8.4.0 >> update constraint for reno to new release 2.7.0 >> update constraint for os-net-config to new release 8.2.0 >> update constraint for os-apply-config to new release 8.2.0 >> update constraint for os-client-config to new release 1.29.0 >> update constraint for ldappool to new release 2.2.0 >> update constraint for aodhclient to new release 1.0.0 >> update constraint for python-searchlightclient to new release 1.3.0 >> update constraint for mistral-lib to new release 0.4.0 >> update constraint for os-collect-config to new release 8.2.0 >> update constraint for ceilometermiddleware to new release 1.2.0 >> update constraint for tricircleclient to new release 0.3.0 >> update constraint for requestsexceptions to new release 1.4.0 >> update constraint for python-magnumclient to new release 2.8.0 >> update constraint for tosca-parser to new release 0.9.0 >> update constraint for python-tackerclient to new release 0.11.0 >> update constraint for python-heatclient to new release 1.14.0 >> > > officially accepted, thanks for keeping me updated while this was going > on. > After the release of openstacksdk 0.11.1, we got a bug report: https://bugs.launchpad.net/python-openstacksdk/+bug/1746535 about a regression with python-openstackclient and query parameters. The fix was written, landed, backported to stable/queens and released. I'd like to request we add 0.11.2 to the library FFE. Thanks! Monty > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From sean.mcginnis at gmx.com Thu Feb 1 22:42:10 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 1 Feb 2018 16:42:10 -0600 Subject: [openstack-dev] [release] Release countdown for week R-3, February 3 - 9 Message-ID: <20180201224210.GA9570@sm-xps> We are already starting on RC week. Time flies when you're having fun. Development Focus ----------------- The Release Candidate (RC) deadline is this Thursday, the 8th. Work should be focused on any release-critical bugs and wrapping up and remaining feature work. General Information ------------------- All cycle-with-milestones and cycle-with-intermediary projects should cut their stable/queens branch by the end of this week. This branch will track the Queens release. Once stable/queens has been created, master will will be ready to switch to Rocky development. While master will no longer be frozen, please prioritize any work necessary for completing Queens plans. Changes can be merged into stable/queens as needed if deemed necessary for an RC2. Once Queens is released, stable/queens will also be ready for any stable point releases. Whether fixing something for another RC, or in preparation of a future stable release, fixes must be merged to master first, then backported to stable/queens. 
Actions
---------

cycle-with-milestones deliverables should post an RC1 to openstack/releases using the version format X.Y.Z.0rc1 along with branch creation from this point. The deliverable changes should look something like:

releases:
  - projects:
      - hash: 90f3ed251084952b43b89a172895a005182e6970
        repo: openstack/example
    version: 1.0.0.0rc1
branches:
  - name: stable/queens
    location: 1.0.0.0rc1

Other cycle deliverables (not *-with-milestones) will look the same, but with your normal versioning. For deliverables with release notes, you may also want to add, or update, your release notes links in the deliverable file to something like:

release-notes: https://docs.openstack.org/releasenotes/example/queens.html

And one more reminder, please add what highlights you want for your project team in the cycle highlights: http://lists.openstack.org/pipermail/openstack-dev/2017-December/125613.html

Upcoming Deadlines & Dates
--------------------------

Rocky PTL nominations: January 29 - February 1
Rocky PTL election: February 7 - 14
OpenStack Summit Vancouver CFP deadline: February 8
Rocky PTG in Dublin: Week of February 26, 2018
Queens cycle-trailing RC deadline: March 1

-- Sean McGinnis (smcginnis) From anne at openstack.org Thu Feb 1 22:47:53 2018 From: anne at openstack.org (Anne Bertucio) Date: Thu, 1 Feb 2018 14:47:53 -0800 Subject: Re: [openstack-dev] [release][PTL] Cycle highlights reminder In-Reply-To: References: <20171214202419.GA23231@sm-xps> <9af6f8cf-3839-1e63-e78d-a3d779a807e5@gmail.com> Message-ID: <5C368A9C-0F87-4CE9-8159-E5882893C5A7@openstack.org> Hi all, With Queens-3 behind us and RC1 coming up, wanted to give a gentle reminder about the cycle-highlights. To get the party started, I added an example highlight for Cinder, Horizon, Ironic and Nova (modify as necessary!): https://review.openstack.org/#/c/540171/ Hopefully this is a fairly painless process that comes with the great reward of not answering "What changed in this release?" five times over to various marketing and press arms. I'm definitely looking to refine how we handle release communications, so come find me in Dublin with all your feedback and suggestions! Cheers, Anne Bertucio OpenStack Foundation anne at openstack.org | 206-992-7961 > On Dec 22, 2017, at 1:06 AM, Thierry Carrez wrote: > > Matt Riedemann wrote: >> On 12/14/2017 2:24 PM, Sean McGinnis wrote: >>> Hey all, >>> >>> As we get closer to Queens-3 and our final RCs, I wanted to remind >>> everyone >>> about the new 'cycle-highlights' we have added to our deliverable info. >>> >>> Background >>> ---------- >>> >>> As a reminder on the background, we were finding that a lot of PTLs were >>> getting pings several times at the end of every release cycle by >>> various folks >>> asking for highlights of what was new and what significant changes >>> were coming >>> in the new release. It was often the same answer to journalists, product >>> managers, and others that needed to compile that info. >>> >>> To try to mitigate that somewhat, we've built in the ability to >>> capture these >>> highlights as part of the release. It get compiled and published to >>> the web >>> site so we have one place to point these folks to. It is intended as a >>> place >>> where they can get the basic info they need, not as a complete marketing >>> message. >>> >>> As you prepare for upcoming releases, please start to consider what >>> you might >>> want to show up in this collection.
We ideally want just a few >>> highlights, >>> probably no more than 3 or 4 in most cases, from each project team. >>> [...] > >> I didn't see this before the q1 or q2 tags - can the cycle highlights be >> applied retroactively? > > Cycle highlights are a once-at-the-end-of-the-cycle thing, not a > per-milestone or per-intermediary-release thing. So you don't need to > apply anything retroactively for the q1 or q2 milestones. > > Basically near the end of the cycle, you look back at what got done in > the past 6 months and extract a few key messaging points. Then we build > a page with all the answers and point all marketing people to it -- > which should avoid duplication of effort in answering a dozen separate > information requests. > > -- > Thierry Carrez (ttx) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org ?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From matt at oliver.net.au Thu Feb 1 23:14:38 2018 From: matt at oliver.net.au (Matthew Oliver) Date: Fri, 2 Feb 2018 10:14:38 +1100 Subject: [openstack-dev] [ptg] Dublin PTG schedule up In-Reply-To: References: Message-ID: Sweet thanks Thierry, Only issue is I see what days things are happening, but not what rooms things are in. Unless I'm failing at reading a table. Matt On Fri, Feb 2, 2018 at 8:02 AM, Thierry Carrez wrote: > Hi everyone, > > The schedule for the Dublin PTG is now posted on the PTG website: > https://www.openstack.org/ptg#tab_schedule > > I'll post on this thread if anything changes, but it's pretty unlikely > at this point. > > Note that we have a lot of available rooms on Monday/Tuesday to discuss > additional topics. If you think of something we should really take half > a day to discuss, please add it to the following etherpad: > > https://etherpad.openstack.org/p/PTG-Dublin-missing-topics > > If there is consensus it's a good topic and we agree on a time where to > fit it, we could add it to the schedule. > > For smalled things (like 90-min discussions) we can book time > dynamically during the event thanks to the new PTGbot features. > > See you there ! > > -- > Thierry Carrez (ttx) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris at openstack.org Fri Feb 2 00:12:34 2018 From: chris at openstack.org (Chris Hoge) Date: Thu, 1 Feb 2018 16:12:34 -0800 Subject: [openstack-dev] [refstack][ptl] PTL Candidacy for Rocky Message-ID: I am submitting my self nomination to serve as the RefStack PTL for the Rocky development cycle. For the Rocky cycle, I will continue to focus efforts on moving the RefStack Server and Client into maintenance mode. Outstanding tasks include: * Adding funtionality to upload subunit data for test results. * Adding Tempest autoconfiguration to the client. * Updating library dependencies. * Providing consistent API documentation. In the previous cycle, the Tempest Autoconfig project was added to RefStack governance. 
Another goal of the Rocky cycle is to transition project leadership to the Tempest Autoconfig team, as this project is where the majority of future work is going to happen. Thank you, Chris Hoge From rico.lin.guanyu at gmail.com Fri Feb 2 04:26:27 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Fri, 2 Feb 2018 12:26:27 +0800 Subject: [openstack-dev] [heat][ptl] PTL Candidacy for Rocky Message-ID: Hi All, I would like to nominate myself to take the role of Heat PTL for the Rocky release. I've been involved with the project for two and a half years. It has been my privilege to work with and learn from this great team, and an honor to serve as Pike and Queens PTL. Over the last half year, the team achieved the following: * Policy in code * Heat dashboard * Heat tempest plugin * Zuul migration in Heat * New resources/properties * Gate stability maintenance * Becoming an Interop add-on * Deprecating/removing a few resources We also completed 2 blueprints, fixed 62 bugs (still going), and made quite a few non-bug improvements (like memory usage improvements, etc.). I would like to keep tracking the jobs above, along with some more tasks that need to be done: * We need more reviewers and developers. We have a few supermen in our team (thank God for that), but still, we need more reviewers and developers than ever. * Goal setting and tracking. IMO, it's always a good thing to set goals at the very beginning of a cycle, so all members can jump in and pick one up if someone fails to keep pushing it or gets a more critical task to work on. Most important is having a way to track progress and make sure our team keeps being productive (which it already is). We also need to filter and review the current community goals to make sure they don't make things worse for Heat. * Cross-project co-work. We have shipped some features over these last few release cycles. The Heat team has been working tightly with the TripleO team to keep them in sync with what we have (which is super cool). What I would also like to see is more syncing up with other teams who use Heat as part of their infrastructure, which will potentially give us more feedback from multiple users/projects. * Inner team communications. We have faced some communication problems in this cycle, which means, as PTL, I'm responsible for making sure our team has a more comfortable workflow. That means I have to try harder to sync up tasks within this team, and at least provide the team with better communication channels that don't take more of everyone's time. Hope you will consider me for Rocky PTL. Thank you! Rico Lin -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Fri Feb 2 05:34:33 2018 From: gmann at ghanshyammann.com (gmann) Date: Fri, 2 Feb 2018 14:34:33 +0900 Subject: [openstack-dev] [QA][ptl] Quality Assurance PTL Candidacy for Rocky Message-ID: Hi everyone, I would like to announce my PTL candidacy for Quality Assurance for the Rocky cycle. I am glad and lucky to work in the OpenStack community and would like to thank all the contributors who supported me to explore new things and ideas. You might know me as gmann on IRC. I have been involved in OpenStack since 2012 and 100% in upstream development since the Icehouse release. Currently I am contributing to QA, Nova, and sometimes other projects too, and I am a core member in QA projects like Tempest, Patrole, etc. Along with that, I volunteered as a mentor in the upstream institute to help bring new contributors onboard. The QA program always plays a key role in smooth upstream development and its quality.
Also help production clouds for their testing and stability. QA responsibilities are always challenging and need lot of coordination and collaboration with each projects team. Till now we have been doing good job which is not just because of QA team but it's all together a combined effort from all projects. Keep having regular ideas from different contributors to improve QA is phenomenal and I truly appreciate. Those ideas helped QA program to grow more in last couple of years. One example is extreme testing of OpenStack which is in discussion since previous summits/PTG with PoC and will add strength to QA program. other example is Patrole for RBAC testing which is really important for cloud security. My concentration as PTL will be to make good progress in extreme testing or any other new initiatives and make Patrole more stable and feature-rich so that we can see their tests running in respective projects gate. We have plugins framework approach for Tempest/Devstack/Grenade and their plugins exist across almost all the OpenStack projects, I feel that collaboration with projects team is the key role of QA team. Even though, we have been helping and fixing plugins whenever needed but I would like to improve more in this area. My objective is to make each plugin owners to use the QA services and interfaces in more better and easy way. I would like to improve the relationship and coverage of help in every possible way. Bug Triage and gate stability is the another important area for QA team. We have been doing good in bug triage since couple of years with target of 0 un-triaged bug. I would like to make sure we continue the focus in both area in next cycle too. Along with that let me summarize me the areas I am planning to focus on in Rocky Cycle. * Improvement and new Ideas in QA program as overall: - Improve the testing coverage for key features. - Improve QA process and tracking in more better way for planned deliverable. - New ideas and their progress to convert into a running software. * Collaboration and Help: - Cross community collaboration on tool, idea sharing etc, opnfv, k8s are best example as of now. - Help the other Projects' developments with test writing/improvement and gate stability - Plugin improvement and helping them on daily basis by defining doable process and goal. * Bring on more contributor and core reviewers. Following are my contribution activities: * http://stackalytics.com/?release=all&metric=marks&user_ id=ghanshyammann&project_type=all * http://stackalytics.com/?release=all&metric=commits& user_id=ghanshyammann&project_type=all Thanks for reading and consideration my candidacy. -gmann -------------- next part -------------- An HTML attachment was scrubbed... URL: From AnNP at vn.fujitsu.com Fri Feb 2 06:12:13 2018 From: AnNP at vn.fujitsu.com (AnNP at vn.fujitsu.com) Date: Fri, 2 Feb 2018 06:12:13 +0000 Subject: [openstack-dev] [neutron][neutron-fwaas] Request for inclusion of bug fixes in RC Message-ID: <62f528a7d6b1470cb4efcad96670ae58@G07SGEXCMSGPS03.g07.fujitsu.local> Hi, I would like to request inclusion of the following patches which address bugs found in our testing. https://review.openstack.org/#/c/539461/ Addressing: https://bugs.launchpad.net/neutron/+bug/1746404 'auto_associate_default_firewall_group' got an error when new port is created We started with a CfgOpt to Disable default FWG on ports. This has caused issues with Conntrack so this option is being removed. 
Also on a related note, we were mistakenly applying on other ports - so tightened up the validation to ensure that it is a VM port. And https://review.openstack.org/#/c/536234/ Addressing: https://bugs.launchpad.net/neutron/+bug/1746855 FWaaS V2 failures with Ml2 is Linuxbridge or security group driver is iptables_hybrid We have failures with Linuxbridge as it is not a supported option and if SG uses iptables_hybrid driver - we have seen issues which possibly might be addressed [1], but with not enough validation we would like to prevent this scenario as well. With more testing and addressing any issues we can remove the restriction on SG with iptables_hybrid driver in the R release. [1] https://review.openstack.org/#/c/538154/ Cheers, An From skandasw at cisco.com Fri Feb 2 06:22:50 2018 From: skandasw at cisco.com (Sridar Kandaswamy (skandasw)) Date: Fri, 2 Feb 2018 06:22:50 +0000 Subject: [openstack-dev] [neutron][neutron-fwaas] Request for inclusion of bug fixes in RC In-Reply-To: <62f528a7d6b1470cb4efcad96670ae58@G07SGEXCMSGPS03.g07.fujitsu.local> References: <62f528a7d6b1470cb4efcad96670ae58@G07SGEXCMSGPS03.g07.fujitsu.local> Message-ID: <5BB26A17-88EA-445D-9542-AA1E3EDCF300@cisco.com> Thanks An. The team has been working with An to review and validate these changes – we believe we are close to the final version and should be able to merge by tomorrow barring any unforeseen surprises. So pls consider adding these to the RC as they address some critical issues as outlined below. Thanks Sridar On 2/1/18, 10:12 PM, "AnNP at vn.fujitsu.com" wrote: Hi, I would like to request inclusion of the following patches which address bugs found in our testing. https://review.openstack.org/#/c/539461/ Addressing: https://bugs.launchpad.net/neutron/+bug/1746404 'auto_associate_default_firewall_group' got an error when new port is created We started with a CfgOpt to Disable default FWG on ports. This has caused issues with Conntrack so this option is being removed. Also on a related note, we were mistakenly applying on other ports - so tightened up the validation to ensure that it is a VM port. And https://review.openstack.org/#/c/536234/ Addressing: https://bugs.launchpad.net/neutron/+bug/1746855 FWaaS V2 failures with Ml2 is Linuxbridge or security group driver is iptables_hybrid We have failures with Linuxbridge as it is not a supported option and if SG uses iptables_hybrid driver - we have seen issues which possibly might be addressed [1], but with not enough validation we would like to prevent this scenario as well. With more testing and addressing any issues we can remove the restriction on SG with iptables_hybrid driver in the R release. [1] https://review.openstack.org/#/c/538154/ Cheers, An __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From wangxiyuan1007 at gmail.com Fri Feb 2 07:24:39 2018 From: wangxiyuan1007 at gmail.com (Xiyuan Wang) Date: Fri, 2 Feb 2018 15:24:39 +0800 Subject: [openstack-dev] [zaqar] Not run for PTL In-Reply-To: References: <6e8813b1-c05b-e729-75dd-7c9863fd0730@catalyst.net.nz> Message-ID: Thanks for your hard working in Zaqar during these years. Glad to know you're still here. ;) 2018-01-23 16:10 GMT+08:00 hao wang : > Thanks Feilong, it's very great to work together with you ! 
> > 2018-01-23 10:56 GMT+08:00 Fei Long Wang : > > Hi team, > > > > I have been working on Zaqar for more than 4 years and serving the PTL > > for the past 5 cycles. I don't plan to run for Zaqar PTL again for the > > Rocky release. I think it's time for somebody else to lead the team for > > next milestone. It has been a great experience for me and thank you for > > all the support from the team and the whole community. I will still be > > around for sure. Thank you. > > > > -- > > Cheers & Best regards, > > Feilong Wang (王飞龙) > > ------------------------------------------------------------ > -------------- > > Senior Cloud Software Engineer > > Tel: +64-48032246 > > Email: flwang at catalyst.net.nz > > Catalyst IT Limited > > Level 6, Catalyst House, 150 Willis Street, Wellington > > ------------------------------------------------------------ > -------------- > > > > > > > > ____________________________________________________________ > ______________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From renat.akhmerov at gmail.com Fri Feb 2 08:00:09 2018 From: renat.akhmerov at gmail.com (Renat Akhmerov) Date: Fri, 2 Feb 2018 15:00:09 +0700 Subject: [openstack-dev] =?utf-8?Q?=5Btelemetry=5D=5Bheat=5D=5Bmistral=5D=5Bsdk=5D=5Bsearchlight=5D=5Bsenlin=5D=5Btacker=5D=5Btricircle=5D=5Btripleo=5D_?=Missing Queens releases In-Reply-To: <20180201193504.GA26084@sm-xps> References: <20180131210344.GA32139@sm-xps> <20180201193504.GA26084@sm-xps> Message-ID: <6bb33ffb-3bc9-4e39-874c-7ccfb6c470da@Spark> Sorry for the late reply. I just want to confirm that it’s ok from Mistral side. Thanks Renat Akhmerov @Nokia On 2 Feb 2018, 02:35 +0700, Sean McGinnis , wrote: > Just confirming and closing things out. We did not receive any negative > responses to the plan below, so a little earlier today I approved the mentioned > patch and we cut releases and branches for all libs. > > The next step if for these new versions to pass CI and get FFEs to raise the > upper constraints for them past our requirements freeze. That official request > will be coming shortly. > > Sean > > On Wed, Jan 31, 2018 at 03:03:44PM -0600, Sean McGinnis wrote: > > While reviewing Queens release deliverables and preparing missing stable/queens > > branches, we have identified several libraries that have not had any Queens > > releases. > > > > In the past, we have stated we would force a release for any missing > > deliverables in order to have a clear branching point. We considered tagging > > the base of the stable/pike branch again and starting a new stable/queens > > branch from there, but that doesn't work for several technical reasons the most > > important of which is that the queens release would not include any changes > > that had been backported to stable/pike, and we have quite a few of those. So, > > we are left with 2 choices: do not release these libraries at all for queens, > > or release from HEAD on master. 
Skipping the releases entirely will make it > > difficult to provide bug fixes in these libraries over the life of the queens > > release so, although it is potentially disruptive, we plan to release from HEAD > > on master. We will rely on the constraints update mechanism to protect the gate > > if the new releases introduce bugs and teams will be able to fix those problems > > on the new stable/queens branch and then release a new version. > > > > See https://review.openstack.org/#/c/539657/ and the notes below for details of > > what will be tagged. > > > > ceilometermiddleware > > -------------------- > > > > Mostly doc and CI related changes, but the "Retrieve project id to ignore from > > keystone" commit (e2bf485) looks like it may be important. > > > > Heat > > ---- > > > > heat-translator > > There are quite a few bug fixes and feature changes merged that have not been > > released. It is currently marked with a type of "library", but we will change > > this to "other" and require a release by the end of the cycle (see > > https://review.openstack.org/#/c/539655/ for that change). Based on the README > > description, this appears to be a command line and therefore should maybe have > > a type of "client-library", but "other" would work as far as release process > > goes. Since this is kind of a special command line, perhaps "other" would be > > the correct type going forward, but we will need input from the Heat team on > > that. > > > > python-heatclient > > Only reno updates, so a new release on master should not be very disruptive. > > > > tosca-parser > > Several unreleased bug fixes and feature changes. Consumed by heat-translator > > and tacker, so there is some risk in releasing it this late. > > > > > > Mistral > > ------- > > > > mistral-lib > > Mostly packaging and build changes, with a couple of fixes. It is used by > > mistral and tripleo-common. > > > > SDK > > --- > > > > requestsexceptions > > No changes this cycle. We will branch stable/queens from the same point as > > stable/pike. > > > > Searchlight > > ----------- > > > > python-searchlightclient > > Only doc and g-r changes. Since the risk here is low, we are going to release > > from master and branch from there. > > > > Senlin > > ------ > > > > python-senlinclient > > Just one bug fix. This is a dependency for heat, mistral, openstackclient, > > python-openstackclient, rally, and senlin-dashboard. The one bug fix looks > > fairly safe though, so we are going to release from master and branch from > > there. > > > > Tacker > > ------ > > > > python-tackerclient > > Many feature changes and bug fixes. This impacts mistral and tacker. > > > > Tricircle > > --------- > > > > python-tricircleclient > > One feature and several g-r changes. > > > > > > Please respond here, comment on the patch, or hit us up in #openstack-release > > if you have any questions or concerns. 
> > > > Thanks, > > Sean McGinnis (smcginnis) > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From duncan.thomas at gmail.com Fri Feb 2 08:11:14 2018 From: duncan.thomas at gmail.com (Duncan Thomas) Date: Fri, 2 Feb 2018 08:11:14 +0000 Subject: [openstack-dev] [api-wg] [api] [cinder] [nova] Support specify action name in request url In-Reply-To: References: Message-ID: So I guess my question here is why is being RESTful good? Sure it's (very, very loosely) a standard, but what are the actual advantages? Standards come and go, what we want most of all is a good quality, easy to use API. I'm not saying that going RESTful is wrong, but I don't see much discussion about what the advantages are, only about how close we are to implementing it. On 1 Feb 2018 10:55 pm, "Ed Leafe" wrote: > On Jan 18, 2018, at 4:07 AM, TommyLike Hu wrote: > > > Recently We found an issue related to our OpenStack action APIs. We > usually expose our OpenStack APIs by registering them to our API Gateway > (for instance Kong [1]), but it becomes very difficult when regarding to > action APIs. We can not register and control them seperately because them > all share the same request url which will be used as the identity in the > gateway service, not say rate limiting and other advanced gateway features, > take a look at the basic resources in OpenStack > > We discussed your email at today’s API-SIG meeting [0]. This is an area > that is always contentious in the RESTful world. Actions, tasks, and state > changes are not actual resources, and in a pure REST design they should > never be part of the URL. Instead, you should POST to the actual resource, > with the desired action in the body. So in your example: > > > URL:/volumes/{volume_id}/action > > BODY:{'extend':{}} > > the preferred way of achieving this is: > > URL: POST /volumes/{volume_id} > BODY: {‘action’: ‘extend’, ‘params’: {}} > > The handler for the POST action should inspect the body, and call the > appropriate method. > > Having said that, we realize that a lot of OpenStack services have adopted > the more RPC-like approach that you’ve outlined. So while we strongly > recommend a standard RESTful approach, if you have already released an > RPC-like API, our advice is: > > a) avoid having every possible verb in the URL. In other words, don’t use: > /volumes/{volume_id}/mount > /volumes/{volume_id}/umount > /volumes/{volume_id}/extend > This moves you further into RPC-land, and will make updating your API to a > more RESTful design more difficult. > > b) choose a standard term for the item in the URL. In other words, always > use ‘action’ or ‘task’ or whatever else you have adopted. Don’t mix > terminology. Then pass the action to perform, along with any parameters in > the body. 
This will make it easier to transition to a RESTful design by > later updating the handlers to first inspect the BODY instead of relying > upon the URL to determine what action to perform. > > You might also want to contact the Kong developers to see if there is a > way to work with a RESTful API design. > > -- Ed Leafe > > [0] http://eavesdrop.openstack.org/meetings/api_sig/2018/api_ > sig.2018-02-01-16.02.log.html#l-28 > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From arxcruz at redhat.com Fri Feb 2 08:41:39 2018 From: arxcruz at redhat.com (Arx Cruz) Date: Fri, 2 Feb 2018 09:41:39 +0100 Subject: [openstack-dev] [tripleo] TripleO CI end of sprint status Message-ID: Hello, On January 31 we came the end of sprint using our new team structure, and here’s the highlights. Sprint Review: On this sprint, the team worked on internal infrastructure. One can see the results of the sprint via https://tinyurl.com/ycpw42pj Ruck and Rover What is Ruck and Rover One person in our team is designated Ruck and another Rover, one is responsible to monitoring the CI, checking for failures, opening bugs, participate on meetings, and this is your focal point to any CI issues. The other person, is responsible to work on these bugs, fix problems and the rest of the team are focused on the sprint. For more information about our structure, check [1] List of bugs that Ruck and Rover were working on: - https://bugs.launchpad.net/tripleo/+bug/1744151 - barbican_tempest_plugin.tests.scenario.test_volume_encryption.VolumeEncryptionTest fails on Invalid Volume - https://bugs.launchpad.net/tripleo/+bug/1745712 - master: etc/pki/tls/certs/undercloud-192.168.24.2.pem]: Failed to generate additional resources using 'eval_generate': comparison of Array with Array failed - https://bugs.launchpad.net/tripleo/+bug/1746023 - rdo phase 2 status to dlrn_api fails with file not found - https://bugs.launchpad.net/tripleo/+bug/1746026 - Tracker, CI: OVB jobs on RDO cloud can't get OVB env because of 504 gateway timeout - https://bugs.launchpad.net/tripleo/+bug/1746281 - tripleo jobs running in rax have slow package downloads, jobs timing out - https://bugs.launchpad.net/tripleo/pike/+bug/1745686 - [gnocchi-db-sync]: gnocchi-upgrade --config-file /etc/gnocchi/gnocchi.conf --skip-storage --skip-incoming returned 2 - https://bugs.launchpad.net/tripleo/+bug/1746729 - tracker, rdo sf nodepool slaves going off line - https://bugs.launchpad.net/tripleo/+bug/1746737 - "msg": "No package matching 'jq' found available, installed or updated" - https://bugs.launchpad.net/tripleo/+bug/1746734 - Periodic Jobs failing at tempest config while creating image(with swift backend) We also have our new Ruck and Rover for this week: - Ruck - Rafael Folco - rfolco|ruck - Rover - Sagi Shnaidman - sshnaidm|rover If you have any questions and/or suggestions, please contact us [1] https://github.com/openstack/tripleo-specs/blob/master/specs/policy/ci-team-structure.rst -------------- next part -------------- An HTML attachment was scrubbed... 
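To make the API-SIG guidance quoted above concrete (POST to the resource itself and name the action in the body), a minimal dispatcher could look like the sketch below. The action, handler, and parameter names are invented for illustration and are not taken from any particular project.

  # Hypothetical handler for POST /volumes/{volume_id}: inspect the body,
  # look up the named action, and delegate to it.
  def extend_volume(volume_id, params):
      new_size = params['new_size']
      # ... ask the volume manager to grow the volume here ...
      return {'volume_id': volume_id, 'size': new_size}

  _ACTIONS = {
      'extend': extend_volume,
  }

  def post_volume(volume_id, body):
      action = body.get('action')
      handler = _ACTIONS.get(action)
      if handler is None:
          raise ValueError('unsupported action: %r' % action)
      return handler(volume_id, body.get('params', {}))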
URL: From thierry at openstack.org Fri Feb 2 08:52:50 2018 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 2 Feb 2018 09:52:50 +0100 Subject: [openstack-dev] [ptg] Dublin PTG schedule up In-Reply-To: References: Message-ID: <138054f0-d461-bdfe-b830-3ca76dbe7707@openstack.org> Matthew Oliver wrote: > Sweet thanks Thierry, > > Only issue is I see what days things are happening, but not what rooms > things are in. Unless I'm failing at reading a table. Yes, that's by design. We publish the days early so that people can fine-tune their travel plans and start organizing. We are still finalizing the exact room assignments, though. Those will be ready by the time the event starts :) -- Thierry From matt at oliver.net.au Fri Feb 2 09:14:22 2018 From: matt at oliver.net.au (Matthew Oliver) Date: Fri, 2 Feb 2018 20:14:22 +1100 Subject: [openstack-dev] [ptg] Dublin PTG schedule up In-Reply-To: <138054f0-d461-bdfe-b830-3ca76dbe7707@openstack.org> References: <138054f0-d461-bdfe-b830-3ca76dbe7707@openstack.org> Message-ID: Ahh makes sense. Thanks Thierry, I can tell this isn't your first Rodeo :) Matt On Fri, Feb 2, 2018 at 7:52 PM, Thierry Carrez wrote: > Matthew Oliver wrote: > > Sweet thanks Thierry, > > > > Only issue is I see what days things are happening, but not what rooms > > things are in. Unless I'm failing at reading a table. > > Yes, that's by design. We publish the days early so that people can > fine-tune their travel plans and start organizing. We are still > finalizing the exact room assignments, though. Those will be ready by > the time the event starts :) > > -- > Thierry > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From apolloliuhx at gmail.com Fri Feb 2 10:08:20 2018 From: apolloliuhx at gmail.com (Hanxi Liu) Date: Fri, 2 Feb 2018 18:08:20 +0800 Subject: [openstack-dev] [telemetry][heat][mistral][sdk][searchlight][senlin][tacker][tricircle][tripleo] Missing Queens releases In-Reply-To: <20180131210344.GA32139@sm-xps> References: <20180131210344.GA32139@sm-xps> Message-ID: On Thu, Feb 1, 2018 at 5:03 AM, Sean McGinnis wrote: > While reviewing Queens release deliverables and preparing missing > stable/queens > branches, we have identified several libraries that have not had any Queens > releases. > > In the past, we have stated we would force a release for any missing > deliverables in order to have a clear branching point. We considered > tagging > the base of the stable/pike branch again and starting a new stable/queens > branch from there, but that doesn't work for several technical reasons the > most > important of which is that the queens release would not include any changes > that had been backported to stable/pike, and we have quite a few of those. > So, > we are left with 2 choices: do not release these libraries at all for > queens, > or release from HEAD on master. Skipping the releases entirely will make it > difficult to provide bug fixes in these libraries over the life of the > queens > release so, although it is potentially disruptive, we plan to release from > HEAD > on master. 
We will rely on the constraints update mechanism to protect the > gate > if the new releases introduce bugs and teams will be able to fix those > problems > on the new stable/queens branch and then release a new version. > > See https://review.openstack.org/#/c/539657/ and the notes below for > details of > what will be tagged. > > ceilometermiddleware > -------------------- > > Mostly doc and CI related changes, but the "Retrieve project id to ignore > from > keystone" commit (e2bf485) looks like it may be important. > > Thanks Sean and release team! It's ok on ceilometermiddleware from Telemetry side. Hanxi Liu (IRC: lhx_) __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From chkumar246 at gmail.com Fri Feb 2 10:20:21 2018 From: chkumar246 at gmail.com (Chandan kumar) Date: Fri, 2 Feb 2018 15:50:21 +0530 Subject: [openstack-dev] [qa] Office Hours Report 2018-02-01 Message-ID: Hello, Thanks everyone for attending QA office hour. Since It's the starting of the year so attendance is low. We managed to triaged some bugs opened/changed in last 14 days. The IRC report [0] and full log [1] are available through meetbot. bug 1745871 in Patrole "RBAC tests for group type specs" https://bugs.launchpad.net/patrole/+bug/1745871 Status: confirmed Related Review: https://review.openstack.org/#/c/525589/ 1743688 in congress "Tempest unable to detect service availability properly, causing congress tests to fail" https://bugs.launchpad.net/devstack/+bug/1743688 Status: In Progress bug 1744096 in devstack "CentOS install fails with 'python3: command not found'" https://bugs.launchpad.net/devstack/+bug/1744096 status: In Progress bug 1746687 in tempest "tempest plugins should be loaded though configuration" https://bugs.launchpad.net/tempest/+bug/1746687 Status: Invalid Above bug leads to interesting discussion on how to ship tempest plugins with kolla tempest containers Below are the remarks: * Bundle all the plugins in a single containers. * User service_available config params to enable or disable a plugin while using it * In tempest we have blacklist or whitelist test params to play with tests. bug 1745322 in tempest "stackviz folder does not show up in logs on zuulv3 native jobs" https://bugs.launchpad.net/tempest/+bug/1745322 Status: Fix committed Review: https://review.openstack.org/#/c/539146/ bug 1745307 in tempest "Group related cases would fail if driver has group spec check" https://bugs.launchpad.net/tempest/+bug/1745307 Status: in-progress Review: https://review.openstack.org/537784 https://bugs.launchpad.net/tempest/+bug/1660612 bug 1660612 in neutron "Tempest full jobs time out on execution" status: New Comments: related fix https://review.openstack.org/#/c/536598/ is not ok. it changes existing tox envs which has an impact on existing jobs it make it all scenario to run parallel, but tempest full deos not run scenario tests. Links: [0]. http://eavesdrop.openstack.org/meetings/qa_office_hour/2018/qa_office_hour.2018-02-01-09.05.txt [1]. http://eavesdrop.openstack.org/meetings/qa_office_hour/2018/qa_office_hour.2018-02-01-09.05.log.html Thanks for reading. 
Thanks, Chandan Kumar From thierry at openstack.org Fri Feb 2 10:52:39 2018 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 2 Feb 2018 11:52:39 +0100 Subject: [openstack-dev] [tc] Technical Committee Status update, February 2nd Message-ID: Hi! This is the weekly summary of Technical Committee initiatives. You can find the full list of all open topics (updated twice a week) at: https://wiki.openstack.org/wiki/Technical_Committee_Tracker If you are working on something (or plan to work on something) governance-related that is not reflected on the tracker yet, please feel free to add to it ! == Recently-approved changes == * New project team: Qinling (Function as a Service) [1] * Goal updates: ironic [1] https://review.openstack.org/#/c/533827/ This has been a busy week for our infrastructure team, as we digested the surge in activity linked to the Queens Feature Freeze (and related release/branching requests). Gate levels look like they are back to normal levels now... We added a new project team to the OpenStack family this week. Qinling, which allows to provide function-as-a-service on top of an OpenStack cloud, was just approved. It will be officially included in the next OpenStack release cycle, Rocky. == PTG preparation == Dublin is only 24 days from now, and we should prepare a bit to make the most of this event. The Project Teams Gathering is a productivity booster: getting together for face-to-face time to get quick agreement on complex questions and making quick progress on critical work. We have a lot of tracks, some around a specific project team, some around a specific SIG, and some around a specific area of discussion. You can find the track layout at: https://www.openstack.org/ptg#tab_schedule We have extra room on Monday and Tuesday for missing (or last-minute) areas of discussion. If you can think of something we should really be discussing, please add your thoughts to: https://etherpad.openstack.org/p/PTG-Dublin-missing-topics Track leads have set up a number of etherpads to openly brainstorm what to discuss. You can find those (or link to missing ones) here: https://wiki.openstack.org/wiki/PTG/Rocky/Etherpads New for this PTG, we'll have post-lunch presentations. Ideas and a strawman programme are proposed here: https://etherpad.openstack.org/p/dublin-PTG-postlunch If you haven't registered yet, please note that the event is very likely to sell out. I advise you to not wait too much before getting your ticket: https://rockyptg.eventbrite.com == Rocky goals == We have been making progress on selecting a set of goals for the Rocky cycle. As a reminder, here are the currently-proposed goals: * Storyboard Migration [2] (diablo_rojo) * Remove mox [3] (chandankumar) * Ensure pagination links [4] (mordred) * Add Cold upgrades capabilities [5] (masayuki) * Enable toggling DEBUG option at runtime [6] (gcb) [2] https://review.openstack.org/513875 [3] https://review.openstack.org/532361 [4] https://review.openstack.org/532627 [5] https://review.openstack.org/#/c/533544/ [6] https://review.openstack.org/534605 In discussions this week the following set was proposed: mox reduction and DEBUG runtime toggling. It represents a good mix of ops-facing improvement and dev-facing tech debt reduction. Please chime in on the threads and reviews if you think this would not be a reasonable set. == Under discussion == A new project team was proposed to regroup people working on PowerVM support in OpenStack. 
It is similar in many ways to the WinStackers team (working on Hyper-V / Windows support). Please comment on the review at: https://review.openstack.org/#/c/540165/ The discussion started by Graham Hayes to clarify how the testing of interoperability programs should be organized in the age of add-on trademark programs is still going on, now on an active mailing-list thread. Please chime in to inform the TC choice: https://review.openstack.org/521602 http://lists.openstack.org/pipermail/openstack-dev/2018-January/126146.html == TC member actions for the coming week(s) == We need to finalize the set of Rocky goals, ahead of the PTG, so that the champions for the selected goals can start planning their PTG activities. Thanks to pabelanger for volunteering to drive the S release naming process. He needs to propose a release_naming.rst change proposing dates and geographic area for the name choices. Finally we need to start brainstorming the contents of the Monday post-lunch presentation at the PTG (the "welcome to the PTG" presentation which will be used to give situation awareness to attendees). == Office hours == To be more inclusive of all timezones and more mindful of people for which English is not the primary language, the Technical Committee dropped its dependency on weekly meetings. So that you can still get hold of TC members on IRC, we instituted a series of office hours on #openstack-tc: * 09:00 UTC on Tuesdays * 01:00 UTC on Wednesdays * 15:00 UTC on Thursdays For the coming week, I expect discussions to still be focused around Rocky goal selection and PTG prep. Cheers, -- Thierry Carrez (ttx) From eumel at arcor.de Fri Feb 2 11:26:56 2018 From: eumel at arcor.de (Frank Kloeker) Date: Fri, 02 Feb 2018 12:26:56 +0100 Subject: [openstack-dev] [release] Release countdown for week R-3, February 3 - 9 In-Reply-To: <20180201224210.GA9570@sm-xps> References: <20180201224210.GA9570@sm-xps> Message-ID: Am 2018-02-01 23:42, schrieb Sean McGinnis: > We are already starting on RC week. Time flies when you're having fun. > > Development Focus > ----------------- > > The Release Candidate (RC) deadline is this Thursday, the 8th. Work > should be > focused on any release-critical bugs and wrapping up and remaining > feature > work. > > General Information > ------------------- > > All cycle-with-milestones and cycle-with-intermediary projects should > cut their > stable/queens branch by the end of this week. This branch will track > the Queens > release. > > Once stable/queens has been created, master will will be ready to > switch to > Rocky development. While master will no longer be frozen, please > prioritize any > work necessary for completing Queens plans. [...] Thx, Sean. If your project contains translation stuff like Horizon Dashboard Plugins, please send me a note when you switch to stable/queens branch, so we can prepare translation platform in time with the new branch. 
thx Frank (eumel8) From hjensas at redhat.com Fri Feb 2 11:28:51 2018 From: hjensas at redhat.com (Harald Jensås) Date: Fri, 02 Feb 2018 12:28:51 +0100 Subject: [openstack-dev] [tripleo] FFE - Feature Freeze Exception request for Routed Spine and Leaf Deployment Message-ID: <1517570931.6277.15.camel@redhat.com> Requesting: Feature Freeze Exception request for Routed Spine and Leaf Deployment Blueprints: https://blueprints.launchpad.net/tripleo/+spec/tripleo-routed-networks-ironic-inspector https://blueprints.launchpad.net/tripleo/+spec/tripleo-routed-networks-deployment All external dependencies for Routed Spine and Leaf Deployment have finally landed. (Except puppet module changes.) Pros ==== This delivers a feature that has been requested since the Kilo release. It makes TripleO more viable in large deployments as well as in edge use cases where OpenStack services are not deployed in one datacenter. The core piece in this is the neutron segments service_plugin. This has been around since Newton. Most of the instack-undercloud patches were first proposed during Ocata. The major change is in the undercloud. In tripleo-heat-templates we need just a small change to ensure we get IP addresses allocated from neutron when the segments service plug-in is enabled in neutron. The overcloud configuration stays the same; we already have users deploying routed networks for the isolated networks using composable networks, so we know it works. Risks ===== I see little risk of introducing a regression to current functionality with these changes. The major part of the undercloud patches has been around for a long time and is passing CI. The format of undercloud.conf is changed: options are deprecated and new options are added to enable multiple control plane subnets/l2-segments to be defined. All options are properly deprecated, so using a configuration file from Pike will still work.
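To make the undercloud.conf change concrete, a routed ctlplane could be described roughly as in the sketch below. This is illustrative only: the exact option and section names are defined by the patches listed next and may still change in review.

  # Illustrative undercloud.conf sketch (option names follow the patches
  # under review and may change before they merge).
  [DEFAULT]
  # Deprecated single-subnet options are replaced by a list of subnet
  # sections, one per routed leaf, plus the subnet local to the undercloud.
  subnets = ctlplane-subnet,leaf1
  local_subnet = ctlplane-subnet

  [ctlplane-subnet]
  cidr = 192.168.24.0/24
  dhcp_start = 192.168.24.5
  dhcp_end = 192.168.24.24
  inspection_iprange = 192.168.24.100,192.168.24.120
  gateway = 192.168.24.1
  masquerade = true

  [leaf1]
  cidr = 192.168.10.0/24
  dhcp_start = 192.168.10.10
  dhcp_end = 192.168.10.90
  inspection_iprange = 192.168.10.100,192.168.10.120
  gateway = 192.168.10.1
  masquerade = true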
===================================== The list of patches that need to land ===================================== instack-undercloud ------------------ * Tripleo routed networks ironic inspector, and Undercloud https://review.openstack.org/#/c/437544/ * Move ctlplane network/subnet setup to python https://review.openstack.org/533364 * Update config to use per network groups https://review.openstack.org/533365 * Update validations to validate all subnets https://review.openstack.org/533366 * Add support for multiple inspection subnets https://review.openstack.org/533367 * Create static routes for remote subnets https://review.openstack.org/533368 * Add per subnet network cidr nat rules https://review.openstack.org/533369 * Add per subnet masquerading https://review.openstack.org/533370 * Install and enable neutron baremetal mech plugin https://review.openstack.org/537830 tripleo-heat-templates ---------------------- * Add subnet property to ctlplane network for server resources https://review.openstack.org/473817  tripleo-docs ------------ * Documentation - TripleO routed-spine-and-leaf https://review.openstack.org/#/c/539939/  puppet-neutron -------------- * Add networking-baremetal ml2 plug-in https://review.openstack.org/537826  * Add networking-baremetal - ironic-neutron-agent https://review.openstack.org/539405 -- |Harald Jensås        |  hjensas:irc From cdent+os at anticdent.org Fri Feb 2 12:33:24 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 2 Feb 2018 12:33:24 +0000 (GMT) Subject: [openstack-dev] [nova] [placement] resource providers update 18-05 Message-ID: Here's resource provider and placement update 18-05. 18-04 was skipped on account of illness. # Most Important Feature freeze has come and gone, RC1 is next week. This means that finding bugs and, where relevant, reporting them with a tag of 'queens-rc-potential' is top priority. The PTG is coming up at the end of this month. If you have topics for discussion that are not already on the etherpad add them: https://etherpad.openstack.org/p/nova-ptg-rocky I wrote a blog post to gather some thinking (and links) about preparing to extract placement from nova (or at least ease the path when it does eventually happen): https://anticdent.org/placement-extraction.html It's probably time to start writing specs for some of the things we know will be a big deal with placement in Rocky. Eric has started with a spec that covers the ProviderTree work. Much of that work is already done, but never had a spec in the first place: https://review.openstack.org/#/c/540111/ I'm on the hook to create a spec for enabling generation handling when associating aggregates. If there are others, getting them started before the PTG can help to make the time at the PTG more effective. # What's Changed A limit is now passed to /allocation_candidates to ensure that we don't cause out of memory errors in big empty clouds. Traits expressed as 'required' in flavor extra specs are passed in requests to placement and /allocation_candidates accepts the the required parameter. More, but not yet all, requests from nova to placement include the global request id. Some, but not all, of the ProviderTree functionality has merged. The full stack of Alternate Hosts is now merged. The ironic driver now manages traits. At least some support for VGPU merged. Not clear what this means for end users. # Help Wanted Testing, Testing, Testing. There are a fair few unstarted bugs related to placement that could do with some attention. 
Here's a handy URL: https://goo.gl/TgiPXb # Main Themes We've not yet identified the new themes, other than to know that Nested remains a big deal. ## Nested Resource Providers The work to get nested providers represented in the /allocation_candidates did not complete before feature freeze. It remains in progresss at https://review.openstack.org/#/q/status:open+topic:bp/nested-resource-providers There's been a lot of discussion in IRC about the sometimes differing goals on how people want NRP to work. One example is at: http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2018-01-29.log.html#t2018-01-29T15:01:24 There's an email thread related to that discussion: http://lists.openstack.org/pipermail/openstack-dev/2018-January/126651.html I think we'll be doing ourselves a favor if we can work to satisfy concrete use cases and then generalize from that. The related provider tree work is now under its own topic: https://review.openstack.org/#/q/topic:bp/update-provider-tree # Other Plenty of these are bugs or fairly trivial and/or non-feature fixes. * doc: mark the max microversions for queens https://review.openstack.org/#/c/539978/ * [Placement] Invalid query parameter could lead to HTTP 500 https://review.openstack.org/#/c/539408/ * [placement] use simple FaultWrapper https://review.openstack.org/#/c/533752/ * Ensure resource classes correctly https://review.openstack.org/#/c/539738/ * Avoid inventory DELETE API (no conflict detection) https://review.openstack.org/#/c/539712/ * Do not normalize allocation ratios https://review.openstack.org/#/c/532924/ * Sending global request ids from nova to placement https://review.openstack.org/#/q/topic:bug/1734625 * VGPU suppport https://review.openstack.org/#/q/topic:bp/add-support-for-vgpu * Update resources once in update available resources https://review.openstack.org/#/c/520024/ (This ought, when it works, to help address some performance concerns with nova making too many requests to placement) * spec: treat devices as generic resources https://review.openstack.org/#/c/497978/ This is a WIP and will need to move to Rocky * Support aggregate affinity filters/weighers https://review.openstack.org/#/q/topic:bp/aggregate-affinity A rocky targeted improvement to affinity handling * Move placement body samples in docs to own dir https://review.openstack.org/#/c/529998/ * Improved functional test coverage for placement https://review.openstack.org/#/q/topic:bp/placement-test-enhancement * Functional tests for traits api https://review.openstack.org/#/c/524094/ * annotate loadapp() (for placement wsgi app) as public https://review.openstack.org/#/c/526691/ * Remove microversion fallback code from report client https://review.openstack.org/#/c/528794/ * WIP: SchedulerReportClient.set_aggregates_for_provider https://review.openstack.org/#/c/532995/ This is likely for rocky as it depends on changing the api for aggregates handling on the placement side to accept and provide a generation * Add functional test for two-cell scheduler behaviors https://review.openstack.org/#/c/452006/ (This is old and maybe out of date, but something we might like to resurrect) * Make API history doc consistent https://review.openstack.org/#/c/477478/ * WIP: General policy sample file for placement https://review.openstack.org/#/c/524425/ * Support relay RP for allocation candidates https://review.openstack.org/#/c/533437/ Bug fix for sharing with multiple providers * Convert driver supported capabilities to compute node provider traits 
https://review.openstack.org/#/c/538498/ # End Usual caveats about missing things apply. One thing I'm curious to know is: From an end user's perspective what does the queen's placement work get ya? -- Chris Dent (⊙_⊙') https://anticdent.org/ freenode: cdent tw: @anticdent From ccamacho at redhat.com Fri Feb 2 13:39:46 2018 From: ccamacho at redhat.com (Carlos Camacho Gonzalez) Date: Fri, 2 Feb 2018 14:39:46 +0100 Subject: [openstack-dev] [tripleo] [nova] Cleaning databases before running a major upgrade or for maintenance purposes. Message-ID: Hi! I'll like to raise a question related to "How to clean Nova databases before running upgrades" We want to check that Nova databases (nova, nova_api, nova_cell0, nova_placement) are in good shape before running upgrade, we know that there is some effort to "purge the deleted instances" but no patches yet for that. In general, how can I know we have cleaned all Nova DBs? This can be i.e. for maintenance purposes or for checking it before the major upgrade. Currently the command we run is: nova-manage db archive_deleted_rows --max_rows 100 --until-complete Still, the archived rows are stored in shadow tables so we still have a big DB, and there is some specs[1] to remove them but nothing landed/usable. [1]: https://blueprints.launchpad.net/nova/+spec/purge-deleted-instances-cmd Thanks, Carlos -------------- next part -------------- An HTML attachment was scrubbed... URL: From witold.bedyk at est.fujitsu.com Fri Feb 2 14:48:41 2018 From: witold.bedyk at est.fujitsu.com (Bedyk, Witold) Date: Fri, 2 Feb 2018 14:48:41 +0000 Subject: [openstack-dev] [monasca] Contribution from T-Systems Message-ID: Hi Yuan, Thanks for reaching out to us in the team meeting. I'm happy to hear you want to contribute your changes to the project. You can find our contribution guidelines here [1]. The general OpenStack Developer's Guide is here [2]. Which components/topics are you interested in, in terms of contribution? Also feature requests or improvement proposals are very welcome. As you have probably noticed we will be planning the work for the next release during the Project Teams Gathering in Dublin end of this month [3, 4]. It's a perfect opportunity to engage in the project. Best regards Witek [1] https://docs.openstack.org/monasca-api/latest/contributor/index.html [2] https://docs.openstack.org/infra/manual/developers.html [3] https://www.openstack.org/ptg [4] https://etherpad.openstack.org/p/monasca-ptg-rocky From aspiers at suse.com Fri Feb 2 15:52:24 2018 From: aspiers at suse.com (Adam Spiers) Date: Fri, 2 Feb 2018 15:52:24 +0000 Subject: [openstack-dev] Remembering Shawn Pearce (fwd) Message-ID: <20180202155224.dhvhcnx32mncbtgs@pacific.linksys.moosehall> Dear Stackers, Since git and Gerrit are at the heart of our development process, I am passing on this very sad news from the git / Gerrit communities that Shawn Pearce has passed away after an aggressive lung cancer. Shawn was founder of Gerrit / JGit / libgit2 / git-gui, and the third most prolific contributor to git itself. https://gitenterprise.me/2018/01/30/shawn-pearce-a-true-leader/ https://sfconservancy.org/blog/2018/jan/30/shawn-pearce/ https://twitter.com/cdibona/status/957822400518696960 https://public-inbox.org/git/CAP8UFD0aKqT5YXJx9-MqeKCKhOVGxninRf8tv30=hKgVmHgmQQ at mail.gmail.com/T/#mf5c158c68565c1c68c80b6543966ef2cad6d151c https://groups.google.com/forum/#!topic/repo-discuss/B4P7G1YirdM/discussion He is survived by his wife and two young sons. 
A memorial fund has been set up in aid of the boys' education and future: https://gitenterprise.me/2018/01/30/gerrithub-io-donations-to-shawns-family/ Thank you Shawn for enriching our lives with your great contributions to the FLOSS community. ----- Forwarded message from Adam Spiers ----- Date: Fri, 2 Feb 2018 15:12:35 +0000 From: Adam Spiers To: Luca Milanesio Subject: Re: Fwd: Remembering Shawn Pearce Hi Luca, that's such sad news :-( What an incredible contribution Shawn made to the community. In addition to Gerrit, I use git-gui and gitk regularly, and also my git-deps utility is based on libgit2. I had no idea he wrote them all, and many other things. I will certainly donate and also ensure that the OpenStack community is aware of the memorial fund. Thanks a lot for letting me know! Luca Milanesio wrote: > Hi Adam, > you probably have received this very sad news :-( > As GerritForge we are actively supporting, contributing and promoting the donations to Shawn's Memorial Fund (https://www.gofundme.com/shawn-pearce-memorial-fund) and added a donation button to GerritHub.io . > > Feel free to spread the sad news to the OpenStack community you are in touch with. > --- > Luca Milanesio > GerritForge > 3rd Fl. 207 Regent Street > London W1B 3HH - UK > http://www.gerritforge.com > > Luca at gerritforge.com > Tel: +44 (0)20 3292 0677 > Mob: +44 (0)792 861 7383 > Skype: lucamilanesio > http://www.linkedin.com/in/lucamilanesio > > > Begin forwarded message: > > > > From: "'Dave Borowitz' via Repo and Gerrit Discussion" > > Subject: Remembering Shawn Pearce > > Date: 29 January 2018 at 15:15:05 GMT > > To: repo-discuss > > Reply-To: Dave Borowitz > > > > Dear Gerrit community, > > > > I am very saddened to report that Shawn Pearce, long-time Git contributor and founder of the Gerrit Code Review project, passed away over the weekend after being diagnosed with lung cancer last year. He spent his final days comfortably in his home, surrounded by family, friends, and colleagues. > > > > Shawn was an exceptional software engineer and it is impossible to overstate his contributions to the Git ecosystem. He had everything from the driving high-level vision to the coding skills to solve any complex problem and bring his vision to reality. If you had the pleasure of collaborating with him on code reviews, as I know many of you did, you've seen first-hand his dedication and commitment to quality. You can read more about his contributions in this recent interview . > > > > In addition to his technical contributions, Shawn truly loved the open-source communities he was a part of, and the Gerrit community in particular. Growing the Gerrit project from nothing to a global community with hundreds of contributors used by some of the world's most prominent tech companies is something he was extremely proud of. > > > > Please join me in remembering Shawn Pearce and continuing his legacy. Feel free to use this thread to share your memories with the community Shawn loved. > > > > If you are interested, his family has set up GoFundMe page to put towards his children's future. > > > > Best wishes, > > Dave Borowitz > > > > > > -- > > -- > > To unsubscribe, email repo-discuss+unsubscribe at googlegroups.com > > More info at http://groups.google.com/group/repo-discuss?hl=en > > > > --- > > You received this message because you are subscribed to the Google Groups "Repo and Gerrit Discussion" group. 
> > To unsubscribe from this group and stop receiving emails from it, send an email to repo-discuss+unsubscribe at googlegroups.com . > > For more options, visit https://groups.google.com/d/optout . > ----- End forwarded message ----- From s at cassiba.com Fri Feb 2 15:54:23 2018 From: s at cassiba.com (Samuel Cassiba) Date: Fri, 2 Feb 2018 07:54:23 -0800 Subject: [openstack-dev] [chef][ptl] PTL candidacy for Rocky Message-ID: Ohai! I am seeking to continue as PTL for Chef OpenStack, also known as openstack-chef. The tl;dr of my candidacy, which can be read at https://review.openstack.org/539211 would be: - The cookbooks are getting better code-wise, but we're not in a good place people-wise to facilitate handing over the reins just yet. - CI and pipelines are a focus of this cycle, to aid in delivering code changes and project visibility. - For a codebase as complex as openstack-chef, to keep it out of irrelevance, the barrier to delivering change must be lowered immensely. In the last cycle, in addition to delivering Chef 13 support to the cookbooks (2+ years worth of deprecations!), I successfully negotiated a delicate, downright awkward, trademark issue on behalf of OpenStack. The outcome of this was to further increase the visibility of OpenStack's output in the open source community. The openstack-chef community also introduced Test Kitchen and InSpec support to the cookbooks, which enables us to further close the gap between CI and local testing. As always, openstack-chef need more reviewers and developers, but testers especially. Without a consistent feedback loop, the codebase starts to exist in a quasi-vacuum. As our pace typically keeps us a release behind, the loop doesn't really close until the "self-LTS" deployers of OpenStack look to the next release. Without someone to keep things moving forward, progress stagnates, and, eventually, even the stalwarts look elsewhere for an upstream. Thank you for reading. Delightfully, Samuel Cassiba From mordred at inaugust.com Fri Feb 2 16:44:44 2018 From: mordred at inaugust.com (Monty Taylor) Date: Fri, 2 Feb 2018 10:44:44 -0600 Subject: [openstack-dev] Remembering Shawn Pearce (fwd) In-Reply-To: <20180202155224.dhvhcnx32mncbtgs@pacific.linksys.moosehall> References: <20180202155224.dhvhcnx32mncbtgs@pacific.linksys.moosehall> Message-ID: <39f79ccf-689e-e97c-0a95-915cfd7c5af1@inaugust.com> On 02/02/2018 09:52 AM, Adam Spiers wrote: > Dear Stackers, > > Since git and Gerrit are at the heart of our development process, I am > passing on this very sad news from the git / Gerrit communities that > Shawn Pearce has passed away after an aggressive lung cancer. > > Shawn was founder of Gerrit / JGit / libgit2 / git-gui, and the third > most prolific contributor to git itself. > >    https://gitenterprise.me/2018/01/30/shawn-pearce-a-true-leader/ >    https://sfconservancy.org/blog/2018/jan/30/shawn-pearce/ >    https://twitter.com/cdibona/status/957822400518696960 > > https://public-inbox.org/git/CAP8UFD0aKqT5YXJx9-MqeKCKhOVGxninRf8tv30=hKgVmHgmQQ at mail.gmail.com/T/#mf5c158c68565c1c68c80b6543966ef2cad6d151c > > > https://groups.google.com/forum/#!topic/repo-discuss/B4P7G1YirdM/discussion > > He is survived by his wife and two young sons.  A memorial fund has > been set up in aid of the boys' education and future: > > > https://gitenterprise.me/2018/01/30/gerrithub-io-donations-to-shawns-family/ > > > Thank you Shawn for enriching our lives with your great contributions > to the FLOSS community. 
++ OpenStack would not be where it is today without Shawn's work. > ----- Forwarded message from Adam Spiers ----- > > Date: Fri, 2 Feb 2018 15:12:35 +0000 > From: Adam Spiers > To: Luca Milanesio > Subject: Re: Fwd: Remembering Shawn Pearce > > Hi Luca, that's such sad news :-(  What an incredible contribution > Shawn made to the community.  In addition to Gerrit, I use git-gui and > gitk regularly, and also my git-deps utility is based on libgit2.  I > had no idea he wrote them all, and many other things. > > I will certainly donate and also ensure that the OpenStack community > is aware of the memorial fund.  Thanks a lot for letting me know! > > Luca Milanesio wrote: >> Hi Adam, >> you probably have received this very sad news :-( >> As GerritForge we are actively supporting, contributing and promoting >> the donations to Shawn's Memorial Fund >> (https://www.gofundme.com/shawn-pearce-memorial-fund) and added a >> donation button to GerritHub.io . >> >> Feel free to spread the sad news to the OpenStack community you are in >> touch with. >> --- >> Luca Milanesio >> GerritForge >> 3rd Fl. 207 Regent Street >> London W1B 3HH - UK >> http://www.gerritforge.com >> >> Luca at gerritforge.com >> Tel:  +44 (0)20 3292 0677 >> Mob: +44 (0)792 861 7383 >> Skype: lucamilanesio >> http://www.linkedin.com/in/lucamilanesio >> >> >> > Begin forwarded message: >> > > From: "'Dave Borowitz' via Repo and Gerrit Discussion" >> >> > Subject: Remembering Shawn Pearce >> > Date: 29 January 2018 at 15:15:05 GMT >> > To: repo-discuss >> > Reply-To: Dave Borowitz >> > > Dear Gerrit community, >> > > I am very saddened to report that Shawn Pearce, long-time Git >> contributor and founder of the Gerrit Code Review project, passed away >> over the weekend after being diagnosed with lung cancer last year. He >> spent his final days comfortably in his home, surrounded by family, >> friends, and colleagues. >> > > Shawn was an exceptional software engineer and it is impossible to >> overstate his contributions to the Git ecosystem. He had everything >> from the driving high-level vision to the coding skills to solve any >> complex problem and bring his vision to reality. If you had the >> pleasure of collaborating with him on code reviews, as I know many of >> you did, you've seen first-hand his dedication and commitment to >> quality. You can read more about his contributions in this recent >> interview >> . >> >> > > In addition to his technical contributions, Shawn truly loved the >> open-source communities he was a part of, and the Gerrit community in >> particular. Growing the Gerrit project from nothing to a global >> community with hundreds of contributors used by some of the world's >> most prominent tech companies is something he was extremely proud of. >> > > Please join me in remembering Shawn Pearce and continuing his >> legacy. Feel free to use this thread to share your memories with the >> community Shawn loved. >> > > If you are interested, his family has set up GoFundMe page >> to put towards >> his children's future. >> > > Best wishes, >> > Dave Borowitz >> > > > -- >> > -- >> > To unsubscribe, email repo-discuss+unsubscribe at googlegroups.com >> > More info at http://groups.google.com/group/repo-discuss?hl=en >> >> > > --- >> > You received this message because you are subscribed to the Google >> Groups "Repo and Gerrit Discussion" group. >> > To unsubscribe from this group and stop receiving emails from it, >> send an email to repo-discuss+unsubscribe at googlegroups.com >> . 
>> > For more options, visit https://groups.google.com/d/optout >> . >> > > ----- End forwarded message ----- > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From thierry at openstack.org Fri Feb 2 16:53:34 2018 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 2 Feb 2018 17:53:34 +0100 Subject: [openstack-dev] Remembering Shawn Pearce (fwd) In-Reply-To: <39f79ccf-689e-e97c-0a95-915cfd7c5af1@inaugust.com> References: <20180202155224.dhvhcnx32mncbtgs@pacific.linksys.moosehall> <39f79ccf-689e-e97c-0a95-915cfd7c5af1@inaugust.com> Message-ID: <619fce9a-6958-e3fb-6386-62f933c16278@openstack.org> Monty Taylor wrote: > On 02/02/2018 09:52 AM, Adam Spiers wrote: >> Dear Stackers, >> >> Since git and Gerrit are at the heart of our development process, I am >> passing on this very sad news from the git / Gerrit communities that >> Shawn Pearce has passed away after an aggressive lung cancer. >> >> Shawn was founder of Gerrit / JGit / libgit2 / git-gui, and the third >> most prolific contributor to git itself. >> >>     https://gitenterprise.me/2018/01/30/shawn-pearce-a-true-leader/ >>     https://sfconservancy.org/blog/2018/jan/30/shawn-pearce/ >>     https://twitter.com/cdibona/status/957822400518696960 >>     >> https://public-inbox.org/git/CAP8UFD0aKqT5YXJx9-MqeKCKhOVGxninRf8tv30=hKgVmHgmQQ at mail.gmail.com/T/#mf5c158c68565c1c68c80b6543966ef2cad6d151c >> >>     >> https://groups.google.com/forum/#!topic/repo-discuss/B4P7G1YirdM/discussion >> >> >> He is survived by his wife and two young sons.  A memorial fund has >> been set up in aid of the boys' education and future: >> >> https://gitenterprise.me/2018/01/30/gerrithub-io-donations-to-shawns-family/ >> >> >> Thank you Shawn for enriching our lives with your great contributions >> to the FLOSS community. > > ++ > > OpenStack would not be where it is today without Shawn's work. Indeed. In open source in general, and in OpenStack in particular, we stand on the shoulders of a multitude of giants. Shawn will be missed. -- Thierry Carrez (ttx) From zbitter at redhat.com Fri Feb 2 17:10:58 2018 From: zbitter at redhat.com (Zane Bitter) Date: Fri, 2 Feb 2018 12:10:58 -0500 Subject: [openstack-dev] [keystone][nova][ironic][heat] Do we want a BM/VM room at the PTG? In-Reply-To: References: Message-ID: <50773bcf-ef48-c92c-4ebc-ef69cb658eb0@redhat.com> On 30/01/18 10:33, Colleen Murphy wrote: > At the last PTG we had some time on Monday and Tuesday for > cross-project discussions related to baremetal and VM management. We > don't currently have that on the schedule for this PTG. There is still > some free time available that we can ask for[1]. Should we try to > schedule some time for this? +1, I would definitely attend this too. - ZB > From a keystone perspective, some things we'd like to talk about with > the BM/VM teams are: > > - Unified limits[2]: we now have a basic REST API for registering > limits in keystone. Next steps are building out libraries that can > consume this API and calculate quota usage and limit allocation, and > developing models for quotas in project hierarchies. Input from other > projects is essential here. > - RBAC: we've introduced "system scope"[3] to fix the admin-ness > problem, and we'd like to guide other projects through the migration. 
> - Application credentials[4]: this main part of this work is largely > done, next steps are implementing better access control for it, which > is largely just a keystone team problem but we could also use this > time for feedback on the implementation so far > > There's likely some non-keystone-related things that might be at home > in a dedicated BM/VM room too. Do we want to have a dedicated day or > two for these projects? Or perhaps not dedicated days, but > planned-in-advance meeting time? Or should we wait and schedule it > ad-hoc if we feel like we need it? > > Colleen > > [1] https://docs.google.com/spreadsheets/d/e/2PACX-1vRmqAAQZA1rIzlNJpVp-X60-z6jMn_95BKWtf0csGT9LkDharY-mppI25KjiuRasmK413MxXcoSU7ki/pubhtml?gid=1374855307&single=true > [2] http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/limits-api.html > [3] http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/system-scope.html > [4] http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/application-credentials.html > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From thierry at openstack.org Fri Feb 2 17:22:55 2018 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 2 Feb 2018 18:22:55 +0100 Subject: [openstack-dev] [ptg] Dublin PTG schedule up In-Reply-To: References: Message-ID: Thierry Carrez wrote: > The schedule for the Dublin PTG is now posted on the PTG website: > https://www.openstack.org/ptg#tab_schedule > > I'll post on this thread if anything changes, but it's pretty unlikely > at this point. Heads-up: the OpenStack-Helm and Watcher teams agreed to trade their time slots. That's reflected on the published schedule now. Cheers, -- Thierry Carrez (ttx) From rmascena at redhat.com Fri Feb 2 17:34:12 2018 From: rmascena at redhat.com (Raildo Mascena de Sousa Filho) Date: Fri, 02 Feb 2018 17:34:12 +0000 Subject: [openstack-dev] [oslo.config][castellan][tripleo][ptg]Protecting plain text secrets in configuration files Message-ID: Hello folks, Various regulations and best practices say that passwords and other secret values should not be stored in plain text in configuration files. There are “secret store” services to manage values that should be kept secure. Castellan provides an abstraction API for accessing those services. [1] In this manner, several different management services can be supported through a single interface. Then, we will be able to use a Castellan reference for those secrets and store it using a proper key store backend, currently Castellan supports Barbican and Vault as a backend, so for this case, we should use a more light solution, such as Custodia[2], which work as Secrets-as-a-Service API, working as a lightweight solution compared with Barbican, besides that, Custodia have some good features like overlayed encryption backend that can be used to store that secret. Currently, We have that olso.config interface for pluggable drivers in progress[3] also the Custodia backend support for Castellan.[4] We are planning to start the Castellan driver for oslo.config as soon as we have that interface done. 
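To make the idea a bit more concrete, below is a rough sketch of how a service could resolve a secret through the existing Castellan abstraction that the oslo.config driver would build on. The secret identifier and context handling are simplified for the example, and exact call signatures may differ between Castellan releases.

  # Rough sketch only: the Castellan key manager abstraction underneath the
  # planned oslo.config driver. Identifiers here are invented for the example.
  from castellan import key_manager
  from castellan.common.objects import passphrase
  from oslo_config import cfg

  CONF = cfg.CONF

  def register_secret(ctxt, value):
      # Store the secret once, out of band, and keep only its reference in
      # the configuration file instead of the plain text value.
      km = key_manager.API(CONF)  # backend (Barbican, Vault, Custodia, ...) is picked via config
      return km.store(ctxt, passphrase.Passphrase(value))

  def resolve_secret(ctxt, secret_id):
      # At service start-up, resolve the reference back into the secret value.
      km = key_manager.API(CONF)
      return km.get(ctxt, secret_id).get_encoded()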
In the next few weeks, that will be the Dublin PTG and we are planning to discuss more this topic in the Oslo session[5], so if you are interested in discussing/contribute for this topic and you will be attending the PTG, please add yourself as an interested person in the topic. Also, we are planning to integrate this whole feature with Tripleo in a near feature, so we are planning to discuss with the Tripleo team a proper way to have that supported as well.[6] Finally, if want to be closer to this topic, or if you want to contribute to this feature, we are having weekly meetings on Tuesday at 1600 UTC on #openstack-meeting-3, we will be glad to have you working with us. [1] https://specs.openstack.org/openstack/oslo-specs/specs/queens/oslo-config-drivers.html [2] https://custodia.readthedocs.io/en/latest/readme.html [3] https://review.openstack.org/#/c/513844/ [4] https://review.openstack.org/#/c/515190/ [5] https://etherpad.openstack.org/p/oslo-ptg-rocky [6] https://etherpad.openstack.org/p/tripleo-ptg-rocky [7] https://etherpad.openstack.org/p/oslo-config-plaintext-secrets Cheers, -- Raildo mascena Software Engineer, Identity Managment Red Hat TRIED. TESTED. TRUSTED. -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Fri Feb 2 17:56:50 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Fri, 2 Feb 2018 11:56:50 -0600 Subject: [openstack-dev] [keystone][nova][ironic][heat] Do we want a BM/VM room at the PTG? In-Reply-To: <50773bcf-ef48-c92c-4ebc-ef69cb658eb0@redhat.com> References: <50773bcf-ef48-c92c-4ebc-ef69cb658eb0@redhat.com> Message-ID: I apologize for using the "baremetal/VM" name, but I wanted to get an etherpad rolling sooner rather than later [0], since we're likely going to have to decide on a new name in person. I ported the initial ideas Colleen mentioned when she started this thread, added links to previous etherpads from Boston and Denver, and ported some topics from the Boston etherpads. Please feel free to add ideas to the list or elaborate on existing ones. Next week we'll start working through them and figure out what we want to accomplish for the session. Once we have an official room for the discussion, I'll add the etherpad to the list in the wiki. [0] https://etherpad.openstack.org/p/baremetal-vm-rocky-ptg On 02/02/2018 11:10 AM, Zane Bitter wrote: > On 30/01/18 10:33, Colleen Murphy wrote: >> At the last PTG we had some time on Monday and Tuesday for >> cross-project discussions related to baremetal and VM management. We >> don't currently have that on the schedule for this PTG. There is still >> some free time available that we can ask for[1]. Should we try to >> schedule some time for this? > > +1, I would definitely attend this too. > > - ZB > >>  From a keystone perspective, some things we'd like to talk about with >> the BM/VM teams are: >> >> - Unified limits[2]: we now have a basic REST API for registering >> limits in keystone. Next steps are building out libraries that can >> consume this API and calculate quota usage and limit allocation, and >> developing models for quotas in project hierarchies. Input from other >> projects is essential here. >> - RBAC: we've introduced "system scope"[3] to fix the admin-ness >> problem, and we'd like to guide other projects through the migration. 
>> - Application credentials[4]: this main part of this work is largely >> done, next steps are implementing better access control for it, which >> is largely just a keystone team problem but we could also use this >> time for feedback on the implementation so far >> >> There's likely some non-keystone-related things that might be at home >> in a dedicated BM/VM room too. Do we want to have a dedicated day or >> two for these projects? Or perhaps not dedicated days, but >> planned-in-advance meeting time? Or should we wait and schedule it >> ad-hoc if we feel like we need it? >> >> Colleen >> >> [1] >> https://docs.google.com/spreadsheets/d/e/2PACX-1vRmqAAQZA1rIzlNJpVp-X60-z6jMn_95BKWtf0csGT9LkDharY-mppI25KjiuRasmK413MxXcoSU7ki/pubhtml?gid=1374855307&single=true >> [2] >> http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/limits-api.html >> [3] >> http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/system-scope.html >> [4] >> http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/application-credentials.html >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From armamig at gmail.com Fri Feb 2 18:33:16 2018 From: armamig at gmail.com (Armando M.) Date: Fri, 2 Feb 2018 10:33:16 -0800 Subject: [openstack-dev] [neutron] cycle highlights for sub-projects Message-ID: Hi neutrinos, RC1 is fast approaching and this time we can add highlights to the release files [1]. If I can ask you anyone interested in contributing to the highlights: please review [2]. Miguel and I will make sure they are compiled correctly. We have time until Feb 9 to get this done. Many thanks, Armando [1] http://lists.openstack.org/pipermail/openstack-dev/2017-December/125613.html [2] https://review.openstack.org/#/c/540476/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Fri Feb 2 18:43:38 2018 From: openstack at nemebean.com (Ben Nemec) Date: Fri, 2 Feb 2018 12:43:38 -0600 Subject: [openstack-dev] [oslo] PTL candidacy Message-ID: Hi, I am submitting my candidacy for Oslo PTL. I have been an Oslo core since 2014 and although my involvement in the project has at times been limited by other responsibilities, I have always kept up on what is going on in Oslo. For the Rocky cycle my primary goals would be: * Continue to maintain the stability and quality of the existing Oslo code. * Help drive the oslo.config improvements that are underway. * Encourage new and existing contributors to ensure the long-term health of the project. I am, of course, always open to suggestions on other areas of focus for Oslo. Thanks. 
-Ben From miguel at mlavalle.com Fri Feb 2 19:18:32 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Fri, 2 Feb 2018 13:18:32 -0600 Subject: [openstack-dev] [neutron][neutron-fwaas] Request for inclusion of bug fixes in RC In-Reply-To: <5BB26A17-88EA-445D-9542-AA1E3EDCF300@cisco.com> References: <62f528a7d6b1470cb4efcad96670ae58@G07SGEXCMSGPS03.g07.fujitsu.local> <5BB26A17-88EA-445D-9542-AA1E3EDCF300@cisco.com> Message-ID: Hi, It was granted earlier today during the Neutron drivers meeting Cheers On Fri, Feb 2, 2018 at 12:22 AM, Sridar Kandaswamy (skandasw) < skandasw at cisco.com> wrote: > Thanks An. The team has been working with An to review and validate these > changes – we believe we are close to the final version and should be able > to merge by tomorrow barring any unforeseen surprises. So pls consider > adding these to the RC as they address some critical issues as outlined > below. > > Thanks > > Sridar > > On 2/1/18, 10:12 PM, "AnNP at vn.fujitsu.com" wrote: > > Hi, > > I would like to request inclusion of the following patches which > address bugs found in our testing. > > https://review.openstack.org/#/c/539461/ > Addressing: https://bugs.launchpad.net/neutron/+bug/1746404 > > 'auto_associate_default_firewall_group' got an error when new port is > created > We started with a CfgOpt to Disable default FWG on ports. This has > caused issues with Conntrack so this option is being removed. Also on a > related note, we were mistakenly applying on other ports - so tightened up > the validation to ensure that it is a VM port. > > And > https://review.openstack.org/#/c/536234/ > Addressing: https://bugs.launchpad.net/neutron/+bug/1746855 > > FWaaS V2 failures with Ml2 is Linuxbridge or security group driver is > iptables_hybrid > We have failures with Linuxbridge as it is not a supported option and > if SG uses iptables_hybrid driver - we have seen issues which possibly > might be addressed [1], but with not enough validation we would like to > prevent this scenario as well. With more testing and addressing any issues > we can remove the restriction on SG with iptables_hybrid driver in the R > release. > > [1] https://review.openstack.org/#/c/538154/ > > Cheers, > An > > ____________________________________________________________ > ______________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Fri Feb 2 19:23:06 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 2 Feb 2018 19:23:06 +0000 Subject: [openstack-dev] Remembering Shawn Pearce (fwd) In-Reply-To: <20180202155224.dhvhcnx32mncbtgs@pacific.linksys.moosehall> References: <20180202155224.dhvhcnx32mncbtgs@pacific.linksys.moosehall> Message-ID: <20180202192305.sr6m4ujcfrqqjjz3@yuggoth.org> On 2018-02-02 15:52:24 +0000 (+0000), Adam Spiers wrote: > Since git and Gerrit are at the heart of our development process, > I am passing on this very sad news from the git / Gerrit > communities that Shawn Pearce has passed away after an aggressive > lung cancer. [...] 
Thanks for sending this along, Adam. Our community owes Shawn (and his family for that matter) a great debt of gratitude, and not just for the software he's written. Many are the times when spearce helped me out personally over IRC with Gerrit-related issues, even though I'm certain he could have been spending his time on more interesting endeavors instead. He will be missed. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From cboylan at sapwetik.org Fri Feb 2 21:25:28 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Fri, 02 Feb 2018 13:25:28 -0800 Subject: [openstack-dev] [All] Gerrit User Study Message-ID: <1517606728.1471454.1257704888.1A3C6F2E@webmail.messagingengine.com> Google is scheduling 60 minute Gerrit user research sessions to help shape the future of Gerrit. If you are interested in providing feedback to Gerrit as users this is a good opportunity to do so. More info can be found at this google group thread https://groups.google.com/forum/#!topic/repo-discuss/F_Qv1R_JtOI. Thank you (and sorry for the spam but Gerrit is an important tool for us, your input is valuable), Clark From twilson at redhat.com Fri Feb 2 21:59:42 2018 From: twilson at redhat.com (Terry Wilson) Date: Fri, 2 Feb 2018 15:59:42 -0600 Subject: [openstack-dev] [release][requirements][FFE] Release ovsdbapp 0.9.1 Message-ID: ovsdbapp 0.9.1 (review https://review.openstack.org/#/c/539489/) has a gate-fixing one-line fix (https://review.openstack.org/#/c/537241). Can I get a FFE for bumping the requirements to ovsdbapp 0.9.1 once the package is built? Terry From andrea.frittoli at gmail.com Fri Feb 2 22:24:49 2018 From: andrea.frittoli at gmail.com (Andrea Frittoli) Date: Fri, 02 Feb 2018 22:24:49 +0000 Subject: [openstack-dev] [QA] Rocky QA PTL candidacy Message-ID: Dear all, I’d like to announce my candidacy [1] for PTL of the QA Program for the Rocky cycle. I served as PTL for QA during the Pike and Queens cycles, and I would be honoured to continue to do so in the next six months. After a few years working with the OpenStack community, I continue to find it an exceptional experience and a great opportunity for meeting and working with great people, learning and innovating. In the past cycle, we focused on providing good and stable interfaces in QA projects for everyone to use. Meanwhile, we supported the OpenStack community in the implementation of the Tempest plugin community goal. This should mean fewer headaches for everyone with Tempest plugins, and a bit more time for the QA team to focus on key areas like the gate stability and bug triage. Outside of those key areas, my priority always remains serving the community, by providing tools, support and advice. There are a few specific topics I care particularly about for the Rocky cycle: - Migration to Zuul v3: my key objective is for project teams to be able to migrate as effortless as possible, enjoy the benefits of Zuul v3 and focus on the things they want to work on. To achieve this the QA team will provide a good set of base jobs and ansible roles for everyone to re-use. During Queens we implemented base devstack and devstack-tempest jobs already; next up are multinode support and grenade, which cover most of the things that we do in Tempest legacy jobs today. - Interoperability testing: with the new add-on programs, I expect that the teams involved will need prompt support and test reviews from the QA team. 
- QA beyond the gate: there is a lot of quality engineering happening on OpenStack beyond the testing we do in the gate and I strive to ensure that those efforts do not happen in isolation. There is opportunity for sharing of ideas, tools, experiences - even beyond the OpenStack community, through initiatives like OpenLab and OPNVF. A QA SIG may be a good forum to make this happen. - Supporting teams working on the cold upgrade goal along with the goal champion. Thank you! Andrea Frittoli (andreaf) [1] https://review.openstack.org/#/c/540542/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From prometheanfire at gentoo.org Fri Feb 2 22:30:49 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Fri, 2 Feb 2018 16:30:49 -0600 Subject: [openstack-dev] [release][requirements][FFE] Release ovsdbapp 0.9.1 In-Reply-To: References: Message-ID: <20180202223049.zf43uz5vb2ehcau6@gentoo.org> On 18-02-02 15:59:42, Terry Wilson wrote: > ovsdbapp 0.9.1 (review https://review.openstack.org/#/c/539489/) has a > gate-fixing one-line fix (https://review.openstack.org/#/c/537241). > Can I get a FFE for bumping the requirements to ovsdbapp 0.9.1 once > the package is built? > Is this just for upper-constraints.txt or for global-requirements.txt as well? -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From sean.mcginnis at gmx.com Fri Feb 2 22:34:33 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 2 Feb 2018 16:34:33 -0600 Subject: [openstack-dev] [release][ptl] Missing and old intermediary projects Message-ID: <20180202223433.GA20855@sm-xps> Hey all, Sending this kind of late on a Friday, but I will also include this information in the weekly countdown email. Just hoping to increase the chances of it getting seen. One of our release models is cycle-with-intermediary. With this type of project, the projects are able to do full releases at any time, with the commitment "to produce a release near the end of the 6-month development cycle to be used with projects using the other cycle-based release models". Ideally, this means these projects will have one or more releases during the development cycle, and will have a final release leading up to the RC1 deadline. This "final" release is then used to cut a stable/queens branch for the project. Well, the RC1 milestone is coming up next Thursday, and we have a few projects following this release model that have not done any release yet for Queens. There are other projects that have done a Queens release, but it has been awhile since those were done, so we're not really sure if they are intended to be the last official release for Queens. For those without a release - if nothing is done in time - the release team will need to force a release off of HEAD to be able to create the stable/queens branch. For those with old Queens releases - unless we hear otherwise, we will need to use the point of that last release to cut stable/queens for those repos. The release team would rather not be the ones to decide when projects are released, nor be the ones to decide what becomes stable/queens for these projects. Please make every effort to release and/or branch these projects before next Thurday's deadline. 
The projects with existing but old Queens releases are: - swift - storlets - monasca / monasca-log-api The projects that have not yet done a Queens intermediary release are: - aodh, ceilometer, panko - heat-translator - ironic-ui - monasca-kibana-plugin, monasca-thresh - murano-agent - patrole - tacker-horizon - tripleo-quickstart - zun, zun-ui For some of these, it might make sense to switch to a different release model. Some of the more mature ones may be better as "independent". If you have any questions or problems that the release team can help with, please come see us in the #openstack-release channel. Thanks, Sean (smcginnis) From johnsomor at gmail.com Fri Feb 2 23:26:31 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Fri, 2 Feb 2018 15:26:31 -0800 Subject: [openstack-dev] [Octavia] Rocky Octavia PTL candidacy Message-ID: My fellow OpenStack community, I would like to nominate myself for Octavia PTL for Rocky. I am currently the PTL for the Queens release series and would like to continue helping our team provide network load balancing services for OpenStack. For those of you that do not know me, I work for Rackspace. Prior to joining Rackspace I worked for Hewlett-Packard for fifteen years on data center automation, distributed network systems, embedded system design, and big data. In the Queens release, we were able to add support for batch member updates, QoS on the load balancer VIPs, support for Castellan, operator tooling, and many more enhancements. Beyond that work, we laid the ground work for Rocky. Looking forward to Rocky I expect the team to finish out some major new features, such as Active/Active load balancers, UDP protocol support, provider drivers, flavors, additional tempest tests and additional operator tooling. I plan to continue working on improving our documentation, specifically with detailed installation, high availability, and neutron-lbaas migration guides. Thank you for your support of Octavia during Queens and your consideration for Rocky, Michael Johnson (johnsom) From twilson at redhat.com Sat Feb 3 00:02:38 2018 From: twilson at redhat.com (Terry Wilson) Date: Fri, 2 Feb 2018 18:02:38 -0600 Subject: [openstack-dev] [release][requirements][FFE] Release ovsdbapp 0.9.1 In-Reply-To: <20180202223049.zf43uz5vb2ehcau6@gentoo.org> References: <20180202223049.zf43uz5vb2ehcau6@gentoo.org> Message-ID: On Fri, Feb 2, 2018 at 4:30 PM, Matthew Thode wrote: > On 18-02-02 15:59:42, Terry Wilson wrote: >> ovsdbapp 0.9.1 (review https://review.openstack.org/#/c/539489/) has a >> gate-fixing one-line fix (https://review.openstack.org/#/c/537241). >> Can I get a FFE for bumping the requirements to ovsdbapp 0.9.1 once >> the package is built? >> > > Is this just for upper-constraints.txt or for global-requirements.txt as > well? A global-requirements.txt probably makes more sense since 0.9.0 introduced the issue. I don't see any reason why someone would want to install it over 0.9.1. It's literally a one-line difference between the two. 
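For readers less familiar with the two files: global-requirements.txt records the minimum version projects may depend on, while upper-constraints.txt pins the exact version CI installs. A rough sketch of the one-line bump under discussion (version number taken from the thread; the comments are explanatory and not part of the real files):

    # global-requirements.txt
    ovsdbapp>=0.9.1

    # upper-constraints.txt
    ovsdbapp===0.9.1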
From prometheanfire at gentoo.org Sat Feb 3 00:44:21 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Fri, 2 Feb 2018 18:44:21 -0600 Subject: [openstack-dev] [release][requirements][FFE] Release ovsdbapp 0.9.1 In-Reply-To: References: <20180202223049.zf43uz5vb2ehcau6@gentoo.org> Message-ID: <20180203004421.hgdj3yl7iyu7qwsa@gentoo.org> On 18-02-02 18:02:38, Terry Wilson wrote: > On Fri, Feb 2, 2018 at 4:30 PM, Matthew Thode wrote: > > On 18-02-02 15:59:42, Terry Wilson wrote: > >> ovsdbapp 0.9.1 (review https://review.openstack.org/#/c/539489/) has a > >> gate-fixing one-line fix (https://review.openstack.org/#/c/537241). > >> Can I get a FFE for bumping the requirements to ovsdbapp 0.9.1 once > >> the package is built? > >> > > > > Is this just for upper-constraints.txt or for global-requirements.txt as > > well? > > A global-requirements.txt probably makes more sense since 0.9.0 > introduced the issue. I don't see any reason why someone would want to > install it over 0.9.1. It's literally a one-line difference between > the two. > It doesn't look like there are any re-releases that'd occur because of this, so you have my signoff. http://codesearch.openstack.org/?q=ovsdbapp&i=nope&files=.*requirements.*&repos= -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From whayutin at redhat.com Sat Feb 3 01:54:28 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Fri, 2 Feb 2018 20:54:28 -0500 Subject: [openstack-dev] [tripleo] tripleo-ci-centos-7-scenario00[1-2]-multinode-oooq-container failing Message-ID: Greetings, Jobs: tripleo-ci-centos-7-scenario001-multinode-oooq-container tripleo-ci-centos-7-scenario002-multinode-oooq-container The above jobs have been failing in check and gate over the past 24 hours. The fix is posted here [1] which reverts [2] Thanks [1] https://review.openstack.org/#/c/540543/ [2] https://review.openstack.org/#/c/537375/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From cnjie0616 at gmail.com Sat Feb 3 04:38:47 2018 From: cnjie0616 at gmail.com (YUAN RUIJIE) Date: Sat, 3 Feb 2018 12:38:47 +0800 Subject: [openstack-dev] [Senlin] [PTL] PTL nomination for Senlin In-Reply-To: <201801311558218822444@zte.com.cn> References: <201801311558218822444@zte.com.cn> Message-ID: +1 Thanks for taking up responsibility to lead the team!!! 2018-01-31 15:58 GMT+08:00 : > > Hi all > > I'd like to announce my candidacy for the PTL role of Senlin Project for > > Rocky cycle. > > > I began to contribute to Senlin project since Mitaka and joined the team as > > a core reviewer in 2016.10. It is my pleasure to work with the great team > > to make this project better and better, and we will keep moving and look > > forward to push Senlin to the next level. > > > As a clustering service, we already can handle some resource types like > nova > > server, heat stack, NFV VDU etc. in past cycles. We also have done a lot of > > great works in Queue cycle, for example we finished k8s on Senlin feature's > > demo[1][2][3][4]. And there are still many works need to do in future. > > > As a PTL in Rocky cycle, I'd like to focus on the tasks as follows: > > > * Promote k8s on Senlin feature implementation and make it use in NFV > > For example: > > - Add ability to do actions on cluster creation/deletion. > > - Add more network interfaces in drivers. 
> > - Add kubernetes master profile, use kubeadm to setup one master node. > > - Add kubernetes node profile, auto retrieve kubernetes data from master > > cluster. > > * Improve health policy to support more useful auto-healing scenario > > * Improve LoadBalance policy when use Octavia service driver > > * Improve runtime data processing inside Senlin server > > * A better support for EDGE-Computing unattended operation use cases[5] > > * A stronger team to take the Senlin project to its next level. > > > Again, it is my pleasure to work with such a great team. > > > Thanks > > XueFeng Liu > > > [1]https://review.openstack.org/#/c/515321/ > > [2]https://v.qq.com/x/page/i05125sfonh.html > > [3]https://v.qq.com/x/page/t0512vo6tw1.html > > [4]https://v.qq.com/x/page/y0512ehqiiq.html > > [5]https://www.openstack.org/videos/boston-2017/integration-of-enterprise- > monitoring-product-senlin-and-mistral-for-auto-healing > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Sat Feb 3 11:07:25 2018 From: zigo at debian.org (Thomas Goirand) Date: Sat, 3 Feb 2018 12:07:25 +0100 Subject: [openstack-dev] Qinling package description (was: Technical Committee Status update, February 2nd) In-Reply-To: References: Message-ID: <2716beba-4965-4029-b1f0-30bf27e22925@debian.org> On 02/02/2018 11:52 AM, Thierry Carrez wrote: > == Recently-approved changes == > > * New project team: Qinling (Function as a Service) [1] > * Goal updates: ironic > > [1] https://review.openstack.org/#/c/533827/ Sorry for this usual "no description" ranting, but I believe it's for the best. While Qinling seems a nice project, its description is IMO not very descriptive. I had to go on the AWS website to understand what AWS Lambda is. Nowhere, I could read what type of language Qinling supports. While I understand that a just born project cannot have a meaningful documentation, almost no project description isn't going to make it very attractive for new contributors. Could we get this improved? Cheers, Thomas Goirand (zigo) From anlin.kong at gmail.com Sat Feb 3 11:54:46 2018 From: anlin.kong at gmail.com (Lingxian Kong) Date: Sun, 4 Feb 2018 00:54:46 +1300 Subject: [openstack-dev] Qinling package description (was: Technical Committee Status update, February 2nd) In-Reply-To: <2716beba-4965-4029-b1f0-30bf27e22925@debian.org> References: <2716beba-4965-4029-b1f0-30bf27e22925@debian.org> Message-ID: Hi, Thomas, Sorry for the inconvenience this new project brings to you, your ranting is welcomed. Currently, you can only refer to http://qinling.readthedocs.io/ for some limited information about Qinling. I know lacking documentation is always a problem for open source project, but we are trying our best to provide more information in the near future, especially given it's an official project now. You are also welcomed for contribution if you like, which is always appreciated. As for your question, only Python programming language is supported for now in the upstream, but I recommend you do your own runtime implementation if you are the cloud provider with your own cloud security consideration. Actually, the runtime part is also pluggable in the codebase. 
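To make the Python runtime mentioned above a little more concrete, here is a minimal sketch of what a Qinling function can look like (the main() entry point and keyword-argument convention follow the upstream examples as I understand them, so treat the exact signature as an assumption):

    # hello.py - packaged and registered as a Qinling function
    def main(name='World', **kwargs):
        # Invocation parameters arrive as keyword arguments; the return
        # value is handed back to the caller as the function result.
        return {'message': 'Hello, %s!' % name}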
Again, documentation and more programming language support are ​ ​ definitely two of the high priorities during Rocky dev cycle. ​ ​ Your feedback is important to us, feel free to pop up in #openstack-qinling for chatting. Cheers, Lingxian Kong (Larry) On Sun, Feb 4, 2018 at 12:07 AM, Thomas Goirand wrote: > On 02/02/2018 11:52 AM, Thierry Carrez wrote: > > == Recently-approved changes == > > > > * New project team: Qinling (Function as a Service) [1] > > * Goal updates: ironic > > > > [1] https://review.openstack.org/#/c/533827/ > > Sorry for this usual "no description" ranting, but I believe it's for > the best. > > While Qinling seems a nice project, its description is IMO not very > descriptive. I had to go on the AWS website to understand what AWS > Lambda is. Nowhere, I could read what type of language Qinling supports. > While I understand that a just born project cannot have a meaningful > documentation, almost no project description isn't going to make it very > attractive for new contributors. > > Could we get this improved? > > Cheers, > > Thomas Goirand (zigo) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From strigazi at gmail.com Sat Feb 3 16:02:00 2018 From: strigazi at gmail.com (Spyros Trigazis) Date: Sat, 3 Feb 2018 17:02:00 +0100 Subject: [openstack-dev] [magnum] Rocky Magnum PTL candidacy Message-ID: Dear Stackers, I would like to nominate myself as PTL for the Magnum project for the Rocky cycle. I have been consistently contributing to Magnum since February 2016 and I am a core reviewer since August 2016. Since then, I have contributed to significant features like cluster drivers, add Magnum tests to Rally (I'm core reviewer to rally to help the rally team with Magnum related reviews), wrote Magnum's installation tutorial and served as docs liaison for the project. My latest contributions include the swarm-mode driver, containerization of the heat-agent and the remaining kubernetes components, fixed the long standing problem of adding custom CAs to the clusters and brought the kubernetes driver up to date, with RBAC configuration and the latest kubernetes dashboard. I have been the release liaison for Magnum for Pike and served as PTL for the Queens release. I have contributed a lot in Magnum's CI jobs (adding multi-node, DIB and new driver jobs). I have been working closely with other projects consumed by Magnum like Heat, Fedora Atomic, kubernetes python client and kubernetes rpms. Despite the slow down on development due shortage of contributions, we managed to keep the project up to date and increase the user base. For the next cycle, I want to enable the Magnum team to complete the work on cluster upgrades, cluster federation, cluster auto-healing, support for different container runtimes and container network backends. Thanks for considering me, Spyros Trigazis [0] https://git.openstack.org/cgit/openstack/election/tree/candidates/rocky/Magnum/strigazi.txt?id=7a31af003f1be68ee81229c8c828716838e5b8dd -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From christophe.sauthier at objectif-libre.com Sun Feb 4 10:36:33 2018 From: christophe.sauthier at objectif-libre.com (Christophe Sauthier) Date: Sun, 04 Feb 2018 11:36:33 +0100 Subject: [openstack-dev] [cloudkitty] Rocky Cloudkitty PTL candidacy Message-ID: <0dafeaea17a5bf6d7e8cfe1465772e13@objectif-libre.com>

Hello everyone, I would like to announce my candidacy for PTL of Cloudkitty. During the Queens cycle we were able to relaunch our community, with the integration of a few regular contributors and new core reviewers. We were also able to change the way we configure Cloudkitty and the definition of the metrics to be fetched, in order to be more agile (and with fewer hard-coded relationships). With that evolution we have been able to extend the spectrum of Cloudkitty to other projects (both within and outside OpenStack). During the Rocky cycle my focus is to continue to expand the spectrum of Cloudkitty integrations with various services. We also have some work planned on our storage concepts. We also plan to improve the reports that can be fetched from Cloudkitty (both graphical reports and raw outputs). Finally, I have also decided to continue working to support wider ecosystem adoption of Cloudkitty as the best solution for chargeback and rating. I would also like to take this opportunity to thank all members of the OpenStack community who helped our team during the last cycles.

Thank you, Christophe Sauthier ---- Christophe Sauthier CEO Objectif Libre : Au service de votre Cloud +33 (0) 6 16 98 63 96 | christophe.sauthier at objectif-libre.com www.objectif-libre.com | @objectiflibre | www.linkedin.com/company/objectif-libre Recevez la Pause Cloud Et DevOps : olib.re/abo-pause

From mordred at inaugust.com Sun Feb 4 15:32:25 2018 From: mordred at inaugust.com (Monty Taylor) Date: Sun, 4 Feb 2018 09:32:25 -0600 Subject: [openstack-dev] [sdk][release][requirements] FFE request for openstacksdk 0.11.3 Message-ID: <3ebd7552-10af-707c-d7e3-8a1054ead4f8@inaugust.com>

Hi! I'd like to request another FFE to fix several neutron commands in python-openstackclient for queens and also to unbreak python-openstackclient's gate. The release proposal patch is here: https://review.openstack.org/540657 The issue at hand was: The osc-functional-devstack-tips job, which tests master changes of openstackclient and openstacksdk against each other with openstackclient's functional tests, was broken and was not testing master against master but rather master of openstackclient against the released version of the SDK. Therefore, the gate that was protecting us against breakages like these was incorrect and let us land a patch that made invalid query parameters raise errors instead of silently not filtering - without also adding missing but needed query parameters as valid. The gate job has been fixed and SDK as of the proposed commit fixes the osc-functional-devstack-tips job. That can be seen in https://review.openstack.org/540554/ The osc-functional-devstack job, which checks OSC master against released SDK, is broken with sdk 0.11.2 because of the bug fixed in the SDK patch. We would want to bump the upper-constraints from 0.11.2 to 0.11.3 in both stable/queens and master upper-constraints files. Thanks!
Monty From dtantsur at redhat.com Sun Feb 4 16:19:34 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Sun, 4 Feb 2018 17:19:34 +0100 Subject: [openstack-dev] [ironic] Not running for PTL this cycle Message-ID: <0b962c10-5cd3-9892-5ea2-fc2bcd28735e@redhat.com> Hi all, I guess it's quite obvious already, but I won't be running for PTL position this time. It's been a challenging and interesting journey, I've learned a lot, and I believe we've achieved a lot together. Now I'd like to get back to calm waters and allow others to driver the project forward :) Of course I'm not going anywhere far, and I'm ready to help whoever gets this chair with their new challenge. Now a small request: please leave me anonymous feedback at https://goo.gl/forms/810u3j8Yh2fymUMG2 that'll help me to improve further :) Thank you all, Dmitry From aj at suse.com Sun Feb 4 16:43:04 2018 From: aj at suse.com (Andreas Jaeger) Date: Sun, 4 Feb 2018 17:43:04 +0100 Subject: [openstack-dev] [trove] Retiring the trove-integration repository, final call In-Reply-To: References: <6e8813b1-c05b-e729-75dd-7c9863fd0730@catalyst.net.nz> Message-ID: On 2018-01-26 22:22, Manoj Kumar wrote: > Initial indication was provided in July last year, that the > trove-integration repository was going away. > All the elements have been merged into trove, and are being maintained > there. > > I do not believe anyone spoke up then. If anyone is depending on the > separate repository, do speak up. Please remove it completely from the CI system - follow https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project * abandon these two open reviews: https://review.openstack.org/#/q/project:openstack/trove-integration+is:open * Remove the project from project-config Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From prometheanfire at gentoo.org Sun Feb 4 19:35:41 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Sun, 4 Feb 2018 13:35:41 -0600 Subject: [openstack-dev] [sdk][release][requirements] FFE request for openstacksdk 0.11.3 In-Reply-To: <3ebd7552-10af-707c-d7e3-8a1054ead4f8@inaugust.com> References: <3ebd7552-10af-707c-d7e3-8a1054ead4f8@inaugust.com> Message-ID: <20180204193541.2fzcfw27jt5evgan@gentoo.org> On 18-02-04 09:32:25, Monty Taylor wrote: > Hi! > > I'd like to request another FFE to fix several neutron commands in > python-openstackclient for queens and also to unbreak > python-openstackclient's gate. > > The release proposal patch is here: > > https://review.openstack.org/540657 > > The issue at hand was: > > The osc-functional-devstack-tips job, which tests master changes of > openstackclient and openstacksdk against each other with > openstackclient's functional tests was broken and was not testing > master against master but rather master of openstackclient against > released version of SDK. Therefore, the gate that was protecting us > against breakages like these was incorrect and let us land a patch that made > invalid query parameters raise errors instead of silently not filtering - > without also adding missing but needed query parameters as valid. > > The gate job has been fixed and SDK as of the proposed commit fixes the > osc-functional-devstack-tips job. 
That can be seen in > https://review.openstack.org/540554/ The osc-functional-devstack job, which > checks OSC master against released SDK is broken with sdk 0.11.2 because of > the bug fixed in the SDK patch. > > We would want to bump the upper-constraints from 0.11.2 to 0.11.3 in both > stable/queens and master upper-constraints files. > As a UC bump you have my +2 It seems like these things require a gr bump even more now, which would cause client re-releases iirc. My question has more to do with having this not happen again. Do you cross gate with other projects (clients)? That would allow you to check what's going into your master with what's in the client master to ensure no breakage. -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From mordred at inaugust.com Sun Feb 4 19:41:06 2018 From: mordred at inaugust.com (Monty Taylor) Date: Sun, 4 Feb 2018 13:41:06 -0600 Subject: [openstack-dev] [sdk][release][requirements] FFE request for openstacksdk 0.11.3 In-Reply-To: <20180204193541.2fzcfw27jt5evgan@gentoo.org> References: <3ebd7552-10af-707c-d7e3-8a1054ead4f8@inaugust.com> <20180204193541.2fzcfw27jt5evgan@gentoo.org> Message-ID: On 02/04/2018 01:35 PM, Matthew Thode wrote: > On 18-02-04 09:32:25, Monty Taylor wrote: >> Hi! >> >> I'd like to request another FFE to fix several neutron commands in >> python-openstackclient for queens and also to unbreak >> python-openstackclient's gate. >> >> The release proposal patch is here: >> >> https://review.openstack.org/540657 >> >> The issue at hand was: >> >> The osc-functional-devstack-tips job, which tests master changes of >> openstackclient and openstacksdk against each other with >> openstackclient's functional tests was broken and was not testing >> master against master but rather master of openstackclient against >> released version of SDK. Therefore, the gate that was protecting us >> against breakages like these was incorrect and let us land a patch that made >> invalid query parameters raise errors instead of silently not filtering - >> without also adding missing but needed query parameters as valid. >> >> The gate job has been fixed and SDK as of the proposed commit fixes the >> osc-functional-devstack-tips job. That can be seen in >> https://review.openstack.org/540554/ The osc-functional-devstack job, which >> checks OSC master against released SDK is broken with sdk 0.11.2 because of >> the bug fixed in the SDK patch. >> >> We would want to bump the upper-constraints from 0.11.2 to 0.11.3 in both >> stable/queens and master upper-constraints files. >> > > As a UC bump you have my +2 > > It seems like these things require a gr bump even more now, which would > cause client re-releases iirc. My question has more to do with having > this not happen again. Do you cross gate with other projects (clients)? > That would allow you to check what's going into your master with what's > in the client master to ensure no breakage. Yah - we do ... the problem was that we had a bug in this one which caused it to be testing against released versions not master versions. That has been rectified, so we SHOULD be good moving forward. Also - as we find or grow more things that consume SDK, we'll add appropriate cross-gate jobs for them as well. Thanks! 
Monty From tpb at dyncloud.net Sun Feb 4 21:51:12 2018 From: tpb at dyncloud.net (Tom Barron) Date: Sun, 4 Feb 2018 16:51:12 -0500 Subject: [openstack-dev] [tripleo] FFE nfs_ganesha integration In-Reply-To: References: <7dfdaada-bfae-f4f5-b8d9-e541757585e2@redhat.com> Message-ID: Just to follow up, CI is passing for the three patches outstanding and the last one has a release note for the overall feature. The trick to getting CI to pass was to introduce a new variant Controller role for when we actually deploy with CephNFS and the VIP for the server on the StorageNFS network. Using the variant controller role and '-n' with network_data_ganesha.yaml (1) enables the new feature to work correctly while (2) making the new feature entirely optional so that current CI runs without being affected by it. I think the three outstanding patches here are ready to merge: https://review.openstack.org/#/q/status:open+topic:bp/nfs-ganesha I want to get them in so they'll show in downstream puddles for QE but my full attention will immediately turn to upstream TripleO CI and doc for this new functionality. In that regard I *think* we'll need Dan Sneddon's work here: https://review.openstack.org/#/c/523638 so that actual deployment of the StorageNFS network doesn't have to involve copying and editing network/config/*/{ceph,compute,controller}/.yaml as done in the DNM patch that I've used for testing actual integration of the feature here: https://review.openstack.org/533767 All said, this one seems to be a good poster child for composable roles + composable networks! -- Tom Barron On Tue, Jan 23, 2018 at 2:48 PM, Emilien Macchi wrote: > I agree this would be a great addition but I'm worried about the > patches which right now don't pass the check pipeline. > Also I don't see any release notes explaining the changes to our users > and it's supposed to improve user experience... > > Please add release notes, make CI passing and we'll probably grant it for > FFE. > > On Mon, Jan 22, 2018 at 8:34 AM, Giulio Fidente > wrote: > > hi, > > > > I would like to request an FFE for the integration of nfs_ganesha, which > > will provide a better user experience to manila users > > > > This work was slown down by a few factors: > > > > - it depended on the migration of tripleo to the newer Ceph version > > (luminous), which happened during the queens cycle > > > > - it depended on some additional functionalities to be implemented in > > ceph-ansible which were only recently been made available to tripleo/ci > > > > - it proposes the addition of on an additional (and optional) network > > (storagenfs) so that guests don't need connectivity to the ceph frontend > > network to be able to use the cephfs shares > > > > The submissions are on review and partially testable in CI [1]. If > accepted, > > I'd like to reassign the blueprint [2] back to the queens cycle, as it > was > > initially. > > > > Thanks > > > > 1. https://review.openstack.org/#/q/status:open+topic:bp/nfs-ganesha > > 2. 
https://blueprints.launchpad.net/tripleo/+spec/nfs-ganesha > > -- > > Giulio Fidente > > GPG KEY: 08D733BA > > > > ____________________________________________________________ > ______________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > -- > Emilien Macchi > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simple_hlw at 163.com Mon Feb 5 01:43:35 2018 From: simple_hlw at 163.com (We We) Date: Mon, 5 Feb 2018 09:43:35 +0800 Subject: [openstack-dev] [requirement][cyborg]FFE - pyspdk requirement dependency In-Reply-To: <25D76CB6-7EDE-491E-ADAB-6FD4B5B56DAC@163.com> References: <53EADDD3-8A86-445F-A5D9-F5401ABB5309@163.com> <25D76CB6-7EDE-491E-ADAB-6FD4B5B56DAC@163.com> Message-ID: <09E6A8E8-0B65-4B7B-9108-D6FF5911904B@163.com> Hi, Thank you for your kind reply. I'm not thinking enough about this part of my work. I am sorry for that, please close the FFE of the pyspdk. Thanks, Helloway > 在 2018年1月31日,上午1:54,We We 写道: > >> Hi, > >> I have modified and resubmitted pyspdk to the pypi. Please check it. > >> Thx, > >> Helloway > >> 在 2018年1月30日,下午12:52,We We > 写道: >> >> Hi, >> The pyspdk is a important tool library [1] which supports Cyborg SPDK driver [2] to manage the backend SPDK-base app, so we need to upload pyspdk into the pypi [3] and then append 'pyspdk>=0.0.1’ item into ‘OpenStack/Cyborg/requirements.txt’ , so that SPDK driver can be built correctly when zuul runs. However, It's not what we thought it would be, if we want to add the new requirements, we should get support from upstream OpenStack/requirements [4] to append 'pyspdk>=0.0.1’ item. >> >> I'm sorry for propose the request so late. Please Please help. >> >> >> [1] https://review.gerrithub.io/#/c/379741/ >> [2] https://review.openstack.org/#/c/538164/11 >> [3] https://pypi.python.org/pypi/pyspdk/0.0.1 >> [4] https://github.com/openstack/requirements >> >> >> Regards, >> Helloway >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Mon Feb 5 02:42:13 2018 From: emilien at redhat.com (Emilien Macchi) Date: Sun, 4 Feb 2018 18:42:13 -0800 Subject: [openstack-dev] [tripleo] FFE - Feuture Freeze Exception request for Routed Spine and Leaf Deployment In-Reply-To: <1517570931.6277.15.camel@redhat.com> References: <1517570931.6277.15.camel@redhat.com> Message-ID: On Fri, Feb 2, 2018 at 3:28 AM, Harald Jensås wrote: > Requesting: > Feuture Freeze Exception request for Routed Spine and Leaf Deployment > > Blueprints: > https://blueprints.launchpad.net/tripleo/+spec/tripleo-routed-networks- > ironic-inspector > https://blueprints.launchpad.net/tripleo/+spec/tripleo-routed-networks- > deployment > > All external dependencies for Routed Spine and Leaf Deployement have > finally landed. (Except puppet module changes.) > > > Pros > ==== > > This delivers a feature that has been requested since the Kilo release. > It makes TripleO more viable in large deployments as well as in edge > use cases where openstack services are not deployed in one datacenter. 
>
> The core piece in this is the neutron segments service_plugin. This has
> been around since Newton. Most of the instack-undercloud patches were
> first proposed during Ocata.
>
> The major change is in the undercloud. In tripleo-heat-templates we
> need just a small change to ensure we get IP addresses allocated from
> neutron when the segments service plug-in is enabled in neutron. The
> overcloud configuration stays the same; we already have users
> deploying routed networks in the isolated networks using composable
> networks, so we know it works.
>
>
> Risks
> =====
>
> I see little risk of introducing a regression to current functionality
> with these changes. The major part of the undercloud patches has been
> around for a long time and passing CI.
>
> The format of undercloud.conf is changed: options are deprecated and
> new options are added to enable multiple control plane subnets/l2-segments
> to be defined. All options are properly deprecated, so
> using a configuration file from Pike will still work.
>
>
>
> =====================================
> The list of patches that need to land
> =====================================
>
> instack-undercloud
> ------------------
>
> * Tripleo routed networks ironic inspector, and Undercloud
> https://review.openstack.org/#/c/437544/
> * Move ctlplane network/subnet setup to python
> https://review.openstack.org/533364
> * Update config to use per network groups
> https://review.openstack.org/533365
> * Update validations to validate all subnets
> https://review.openstack.org/533366
> * Add support for multiple inspection subnets
> https://review.openstack.org/533367
> * Create static routes for remote subnets
> https://review.openstack.org/533368
> * Add per subnet network cidr nat rules
> https://review.openstack.org/533369
> * Add per subnet masquerading
> https://review.openstack.org/533370
> * Install and enable neutron baremetal mech plugin
> https://review.openstack.org/537830
>
> tripleo-heat-templates
> ----------------------
>
> * Add subnet property to ctlplane network for server resources
> https://review.openstack.org/473817
>
> tripleo-docs
> ------------
>
> * Documentation - TripleO routed-spine-and-leaf
> https://review.openstack.org/#/c/539939/
>
> puppet-neutron
> --------------
>
> * Add networking-baremetal ml2 plug-in
> https://review.openstack.org/537826
> * Add networking-baremetal - ironic-neutron-agent
> https://review.openstack.org/539405
>

I'm a bit concerned by the delay of this request. The feature freeze request deadline was 10 days ago: https://releases.openstack.org/queens/schedule.html#q-ff We're now in the process of producing a release candidate. The amount of code that needs to land to have the feature completed isn't small, but it looks well tested and you seem pretty confident. I'm not sure what to vote on this one, tbh, because the use case is super important and we know how important the Queens release is to us. But at the same time there is a risk of introducing problems, potentially delaying the release and, after that, the delivery of other features... I guess I'm OK as long as all patches pass ALL CI jobs without exception and are carefully tested and reviewed.

Thanks, -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed...
URL: From gdubreui at redhat.com Mon Feb 5 03:28:19 2018 From: gdubreui at redhat.com (Gilles Dubreuil) Date: Mon, 5 Feb 2018 14:28:19 +1100 Subject: [openstack-dev] [api-wg] [api] [cinder] [nova] Support specify action name in request url In-Reply-To: References: Message-ID: <67d294b6-ec7a-d944-8080-4e36b2e8f920@redhat.com> As you said RESTful is not a standard but brings guidelines of good practices. Which in turn doesn't preclude adding ideas, as long as respecting RESTful approach. So we get from both sides. Therefore a good schema structure adds to a de-facto standard, once the practice is commonly used. On 02/02/18 19:11, Duncan Thomas wrote: > So I guess my question here is why is being RESTful good? Sure it's > (very, very loosely) a standard, but what are the actual advantages? > Standards come and go, what we want most of all is a good quality, > easy to use API. > > I'm not saying that going RESTful is wrong, but I don't see much > discussion about what the advantages are, only about how close we are > to implementing it. > > On 1 Feb 2018 10:55 pm, "Ed Leafe" > wrote: > > On Jan 18, 2018, at 4:07 AM, TommyLike Hu > wrote: > > >    Recently We found an issue related to our OpenStack action > APIs. We usually expose our OpenStack APIs by registering them to > our API Gateway (for instance Kong [1]), but it becomes very > difficult when regarding to action APIs. We can not register and > control them seperately because them all share the same request > url which will be used as the identity in the gateway service, not > say rate limiting and other advanced gateway features, take a look > at the basic resources in OpenStack > > We discussed your email at today’s API-SIG meeting [0]. This is an > area that is always contentious in the RESTful world. Actions, > tasks, and state changes are not actual resources, and in a pure > REST design they should never be part of the URL. Instead, you > should POST to the actual resource, with the desired action in the > body. So in your example: > > > URL:/volumes/{volume_id}/action > > BODY:{'extend':{}} > > the preferred way of achieving this is: > > URL: POST /volumes/{volume_id} > BODY: {‘action’: ‘extend’, ‘params’: {}} > > The handler for the POST action should inspect the body, and call > the appropriate method. > > Having said that, we realize that a lot of OpenStack services have > adopted the more RPC-like approach that you’ve outlined. So while > we strongly recommend a standard RESTful approach, if you have > already released an RPC-like API, our advice is: > > a) avoid having every possible verb in the URL. In other words, > don’t use: >   /volumes/{volume_id}/mount >   /volumes/{volume_id}/umount >   /volumes/{volume_id}/extend > This moves you further into RPC-land, and will make updating your > API to a more RESTful design more difficult. > > b) choose a standard term for the item in the URL. In other words, > always use ‘action’ or ‘task’ or whatever else you have adopted. > Don’t mix terminology. Then pass the action to perform, along with > any parameters in the body. This will make it easier to transition > to a RESTful design by later updating the handlers to first > inspect the BODY instead of relying upon the URL to determine what > action to perform. > > You might also want to contact the Kong developers to see if there > is a way to work with a RESTful API design. 
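To illustrate the handler pattern described above, here is a small, self-contained Python sketch (illustrative only, not actual Cinder or Nova code; all names are hypothetical) of a POST handler that inspects the body and dispatches on a single 'action' key:

    def extend_volume(volume_id, new_size=None):
        return {'volume': volume_id, 'status': 'extending', 'new_size': new_size}

    def attach_volume(volume_id, instance_uuid=None):
        return {'volume': volume_id, 'status': 'attaching', 'instance': instance_uuid}

    # One handler table instead of one URL per verb keeps the API closer to
    # REST while still supporting RPC-like operations.
    ACTION_HANDLERS = {
        'extend': extend_volume,
        'attach': attach_volume,
    }

    def post_volume(volume_id, body):
        # RESTful variant: POST /volumes/{volume_id} with
        # {"action": "extend", "params": {"new_size": 10}} in the body.
        action = body.get('action')
        handler = ACTION_HANDLERS.get(action)
        if handler is None:
            raise ValueError('Unknown action: %r' % action)
        return handler(volume_id, **body.get('params', {}))

A gateway that needs per-action control then has one predictable place to look (the 'action' key in the body), although routing on the body is still harder for most gateways than routing on the URL.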
> > -- Ed Leafe > > [0] > http://eavesdrop.openstack.org/meetings/api_sig/2018/api_sig.2018-02-01-16.02.log.html#l-28 > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From glongwave at gmail.com Mon Feb 5 04:01:32 2018 From: glongwave at gmail.com (ChangBo Guo) Date: Mon, 5 Feb 2018 12:01:32 +0800 Subject: [openstack-dev] [oslo] PTL candidacy In-Reply-To: References: Message-ID: Thanks for stepping up to take the role, Ben looking forward to making oslo better with your lead . 2018-02-03 2:43 GMT+08:00 Ben Nemec : > Hi, > > I am submitting my candidacy for Oslo PTL. > > I have been an Oslo core since 2014 and although my involvement in the > project > has at times been limited by other responsibilities, I have always kept up > on > what is going on in Oslo. > > For the Rocky cycle my primary goals would be: > > * Continue to maintain the stability and quality of the existing Oslo code. > > * Help drive the oslo.config improvements that are underway. > > * Encourage new and existing contributors to ensure the long-term health of > the project. > > I am, of course, always open to suggestions on other areas of focus for > Oslo. > > Thanks. > > -Ben > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- ChangBo Guo(gcb) Community Director @EasyStack -------------- next part -------------- An HTML attachment was scrubbed... URL: From tengqim at linux.vnet.ibm.com Mon Feb 5 04:44:13 2018 From: tengqim at linux.vnet.ibm.com (Qiming Teng) Date: Mon, 5 Feb 2018 12:44:13 +0800 Subject: [openstack-dev] [CI][Keystone][Requirements][Release] What happened to the gate on Feb 4th? Message-ID: <20180205044412.GA3974@qiming-ThinkCentre-M58p> Starting about 24 hours ago, we have been notified CI gate failure although we haven't changed anything to our project locally. Before that we have spent quite some time making the out-of-tree tempest plugins work on gate. After checking the log again and again ... we found the following logs from Keystone: Feb 05 03:31:12.609492 ubuntu-xenial-ovh-gra1-0002362092 devstack at keystone.service[24845]: WARNING keystone.common.wsgi [None req-dfcbf106-fbf5-41bd-9012-3c65d1de5f9a None admin] Could not find project: service.: ProjectNotFound: Could not find project: service. Feb 05 03:31:13.845694 ubuntu-xenial-ovh-gra1-0002362092 devstack at keystone.service[24845]: WARNING keystone.common.wsgi [None req-50feed46-7c15-425d-bec7-1b4a7ccf6859 None admin] Could not find service: clustering.: ServiceNotFound: Could not find service: clustering. 
Feb 05 03:31:12.552647 ubuntu-xenial-ovh-gra1-0002362092 devstack at keystone.service[24845]: WARNING keystone.common.wsgi [None req-0a5e660f-dad6-4779-aea4-dd6969c728e6 None admin] Could not find domain: Default.: DomainNotFound: Could not find domain: Default. Feb 05 03:31:12.441128 ubuntu-xenial-ovh-gra1-0002362092 devstack at keystone.service[24845]: WARNING keystone.common.wsgi [None req-7eb9ed90-28fc-40aa-8a41-d560f7a156c9 None admin] Could not find user: senlin.: UserNotFound: Could not find user: senlin. Feb 05 03:31:12.336572 ubuntu-xenial-ovh-gra1-0002362092 devstack at keystone.service[24845]: WARNING keystone.common.wsgi [None req-19e52d02-5471-49a2-8acd-360199d8c6e0 None admin] Could not find role: admin.: RoleNotFound: Could not find role: admin. Feb 05 03:28:33.797665 ubuntu-xenial-ovh-gra1-0002362092 devstack at keystone.service[24845]: WARNING keystone.common.wsgi [None req-544cd822-18a4-4f7b-913d-297716418239 None admin] Could not find user: glance.: UserNotFound: Could not find user: glance. Feb 05 03:28:29.993214 ubuntu-xenial-ovh-gra1-0002362092 devstack at keystone.service[24845]: WARNING py.warnings [None req-dc411d9c-6ab9-44e3-9afb-20e5e7034f12 None admin] /usr/local/lib/python2.7/dist-packages/oslo_policy/policy.py:865: UserWarning: Policy identity:create_endpoint failed scope check. The token used to make the request was project scoped but the policy requires ['system'] scope. This behavior may change in the future where using the intended scope is required Feb 05 03:28:29.920892 ubuntu-xenial-ovh-gra1-0002362092 devstack at keystone.service[24845]: WARNING keystone.common.wsgi [None req-32a4a378-d6d3-411e-9842-2178e577af27 None admin] Could not find service: compute.: ServiceNotFound: Could not find service: compute. .... ---------------------- So I'm wondering what the hack happened? Keystone version bump? Devstack changed? Tempest settings changed? Why are we merging these changes near the end of a cycle when people are focusing on stabilizing things? Any hints on these are highly appreciated. - Qiming From renat.akhmerov at gmail.com Mon Feb 5 06:22:15 2018 From: renat.akhmerov at gmail.com (Renat Akhmerov) Date: Mon, 5 Feb 2018 13:22:15 +0700 Subject: [openstack-dev] [mistral][ptl] PTL candidacy Message-ID: <0e240670-03ce-46ed-ab65-aa3caecc3ecc@Spark> Hi, I'm Renat Akhmerov. I'm running for PTL of Mistral in Rocky. Mistral is a workflow service developed within the OpenStack community from the ground up. In queens we mainly focused on bugfixing, improving performance and documentation. Performance was again significantly improved (~100%) by optimizing DB operations and data schema (mostly additional indexex) and using caching technics. We also made Mistral more robust in various failure situations. To achieve that we came up with a number of protection mechanisms. The two other noticeable features we added are: * We can now start a Mistral workflow based on an existing workflow   execution, no matter if it's still running or finished. Given an ID of   an execution Mistral copies all needed parameters (input, env etc.) and   creates a new execution. * When creating a workflow execution, we can now pass an ID of the new   execution. If an execution with this ID already exists the REST endpoint   just returns details of this execution as if it was GET operation. If   not, it create a execution with this ID. Thus creation of workflow   execution can be idempotent. 
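A rough sketch of how that idempotent creation can be used from a client (the endpoint, port and field names are assumptions based on the Mistral v2 REST API rather than the exact merged interface):

    import uuid
    import requests

    MISTRAL = 'http://mistral.example.com:8989/v2/executions'  # hypothetical host
    HEADERS = {'X-Auth-Token': 'KEYSTONE_TOKEN'}               # placeholder token

    body = {
        'id': str(uuid.uuid4()),         # client-chosen execution ID
        'workflow_name': 'my_workflow',  # hypothetical workflow
        'input': {'param1': 'value1'},
    }

    # Sending the same request twice returns the same execution instead of
    # starting a duplicate, so retrying after a timeout becomes safe.
    for attempt in range(2):
        resp = requests.post(MISTRAL, json=body, headers=HEADERS)
        print(attempt, resp.status_code, resp.json().get('id'))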
For the next cycle I'd like to propose the following roadmap: * Keep improving multi-node mode and HA * Rearchitect Mistral Scheduler, make it more suitable for HA * Optimize ‘join’ tasks * Close all the gaps in the documentation and restructure it so it is more   convenient to read and navigate * Usability   * New CLI/API (more consistent and human friendly interface)   * Debugging workflows   * Workflow failure analysis (error messages, navigate through nested     workflows etc.) * Refactor Actions subsystem   * Actions testability   * Move OpenStack actions into mistral-extra and with better test coverage     and usability Some of those items have now been in progress for a few months. We keep working on them and I hope most of them will be completed in the next cycle. Should you have any ideas on these points we're always happy to discuss and correct our plans. We're always happy to get new contributors on the project and always ready to help people interested in Mistral development get up to speed. The best way to get in touch with us is IRC channel #openstack-mistral. The corresponding patch to openstack/election: https://review.openstack.org/#/c/540720/ Thanks Renat Akhmerov @Nokia -------------- next part -------------- An HTML attachment was scrubbed... URL: From luckyvega.g at gmail.com Mon Feb 5 06:44:23 2018 From: luckyvega.g at gmail.com (Vega Cai) Date: Mon, 05 Feb 2018 06:44:23 +0000 Subject: [openstack-dev] [tricircle] Rocky Tricircle PTL candidacy Message-ID: Hi folks, I would like to announce my self nomination for the PTL candidacy in Tricircle Rocky cycle. My name is Zhiyuan Cai, and my IRC handle is zhiyuan. I am currently the PTL of Tricircle for Queens cycle and have been actively participating in the development of this project since Mitaka cycle. During the Queens cycle, we begin to bring QoS and LBaas support to Tricircle, test scenario coverage is improved in our smoke test, we also start to solve the resource deletion reliability problem and have figured out a solution. For the coming Rocky cycle, here are some works we can focus on: * The QoS and LBaas features are not fully supported in Queens cycle so we can improve them in Rocky cycle. * Implement the new cross-Neutron L3 networking model that doesn't depend on host routes. * Finish the resource deletion reliability solution. Hope everyone will enjoy running and developing Tricircle. Thank you for your kind consideration of my candidacy. BR Zhiyuan Cai -- BR Zhiyuan -------------- next part -------------- An HTML attachment was scrubbed... URL: From akekane at redhat.com Mon Feb 5 07:00:07 2018 From: akekane at redhat.com (Abhishek Kekane) Date: Mon, 5 Feb 2018 12:30:07 +0530 Subject: [openstack-dev] [glance] FFE request for --check feature In-Reply-To: References: Message-ID: We have discussed this in glance weekly meeting [1] and most of the core reviewers are inclined towards accepting this FFE. +1 from my side as this --check command will be very helpful for operators. Thank you Bhagyashri for working on this. 
Abhishek Kekane On Wed, Jan 31, 2018 at 7:29 PM, Shewale, Bhagyashri < Bhagyashri.Shewale at nttdata.com> wrote: > Hi Glance Folks, > > I'm requesting an Feature Freeze Exception for the lite-spec > http://specs.openstack.org/openstack/glance-specs/specs/ > untargeted/glance/lite-spec-db-sync-check.html > which is implemented by https://review.openstack.org/#/c/455837/8/ > > Regards, > Bhagyashri Shewale > > ______________________________________________________________________ > Disclaimer: This email and any attachments are sent in strictest confidence > for the sole use of the addressee and may contain legally privileged, > confidential, and proprietary data. If you are not the intended recipient, > please advise the sender by replying promptly to this email and then delete > and destroy this email and any attachments without any further use, copying > or forwarding. > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aj at suse.com Mon Feb 5 07:16:05 2018 From: aj at suse.com (Andreas Jaeger) Date: Mon, 5 Feb 2018 08:16:05 +0100 Subject: [openstack-dev] [CI][Keystone][Requirements][Release] What happened to the gate on Feb 4th? In-Reply-To: <20180205044412.GA3974@qiming-ThinkCentre-M58p> References: <20180205044412.GA3974@qiming-ThinkCentre-M58p> Message-ID: <9da6b759-b234-484e-4dc6-bff2a8b9872b@suse.com> On 2018-02-05 05:44, Qiming Teng wrote: > Starting about 24 hours ago, we have been notified CI gate failure > although we haven't changed anything to our project locally. Before that > we have spent quite some time making the out-of-tree tempest plugins > work on gate. What is *your* project? PLease provide also links to full failure logs, Andreas > After checking the log again and again ... we found the following logs > from Keystone: > > Feb 05 03:31:12.609492 ubuntu-xenial-ovh-gra1-0002362092 > devstack at keystone.service[24845]: WARNING keystone.common.wsgi [None > req-dfcbf106-fbf5-41bd-9012-3c65d1de5f9a None admin] Could not find > project: service.: ProjectNotFound: Could not find project: service. > > Feb 05 03:31:13.845694 ubuntu-xenial-ovh-gra1-0002362092 > devstack at keystone.service[24845]: WARNING keystone.common.wsgi [None > req-50feed46-7c15-425d-bec7-1b4a7ccf6859 None admin] Could not find > service: clustering.: ServiceNotFound: Could not find service: > clustering. > > Feb 05 03:31:12.552647 ubuntu-xenial-ovh-gra1-0002362092 > devstack at keystone.service[24845]: WARNING keystone.common.wsgi [None > req-0a5e660f-dad6-4779-aea4-dd6969c728e6 None admin] Could not find > domain: Default.: DomainNotFound: Could not find domain: Default. > > Feb 05 03:31:12.441128 ubuntu-xenial-ovh-gra1-0002362092 > devstack at keystone.service[24845]: WARNING keystone.common.wsgi [None > req-7eb9ed90-28fc-40aa-8a41-d560f7a156c9 None admin] Could not find > user: senlin.: UserNotFound: Could not find user: senlin. > > Feb 05 03:31:12.336572 ubuntu-xenial-ovh-gra1-0002362092 > devstack at keystone.service[24845]: WARNING keystone.common.wsgi [None > req-19e52d02-5471-49a2-8acd-360199d8c6e0 None admin] Could not find > role: admin.: RoleNotFound: Could not find role: admin. 
> > Feb 05 03:28:33.797665 ubuntu-xenial-ovh-gra1-0002362092 > devstack at keystone.service[24845]: WARNING keystone.common.wsgi [None > req-544cd822-18a4-4f7b-913d-297716418239 None admin] Could not find > user: glance.: UserNotFound: Could not find user: glance. > > Feb 05 03:28:29.993214 ubuntu-xenial-ovh-gra1-0002362092 > devstack at keystone.service[24845]: WARNING py.warnings [None > req-dc411d9c-6ab9-44e3-9afb-20e5e7034f12 None admin] > /usr/local/lib/python2.7/dist-packages/oslo_policy/policy.py:865: > UserWarning: Policy identity:create_endpoint failed scope check. The > token used to make the request was project scoped but the policy > requires ['system'] scope. This behavior may change in the future where > using the intended scope is required > > Feb 05 03:28:29.920892 ubuntu-xenial-ovh-gra1-0002362092 > devstack at keystone.service[24845]: WARNING keystone.common.wsgi [None > req-32a4a378-d6d3-411e-9842-2178e577af27 None admin] Could not find > service: compute.: ServiceNotFound: Could not find service: compute. > > .... > > ---------------------- > > So I'm wondering what the hack happened? Keystone version bump? > Devstack changed? Tempest settings changed? > Why are we merging these changes near the end of a cycle when people are > focusing on stabilizing things? > > Any hints on these are highly appreciated. -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From renat.akhmerov at gmail.com Mon Feb 5 07:48:50 2018 From: renat.akhmerov at gmail.com (Renat Akhmerov) Date: Mon, 5 Feb 2018 14:48:50 +0700 Subject: [openstack-dev] [mistral] Proposing time slots for Mistral office hours Message-ID: <9580a64c-095b-49dd-a117-8f4e4a200022@Spark> Hi, Not so long ago we decided to stop holding weekly meetings in one of the general IRC channel (it was #openstack-meeting-3 for the last several months). The main reason was that we usually didn’t have a good representation of the team there because the team is distributed across the world. We tried to find a time slot several times that would work well for all the team members but failed to. Another reason is that we didn’t always have a clear reason to gather because everyone was just focused on their tasks and a discussion wasn’t much needed so a meeting was even a distraction. However, despite all this we still would like channels to communicate, the team members and people who have user questions and/or would like to start contributing. Similarly to other teams in OpenStack we’d like to try the “Office hours” concept. If we follow it we’re supposed to have team members, for whom the time slot is OK, available in our channel #openstack-mistral during certain hours. These hours can be used for discussing our development stuff between team members from different time zones and people outside the team would know when they can come and talk to us. Just to start the discussion on what the office hours time slots could be I’m proposing the following time slots: 1. Mon 16.00 UTC (it used to be our time of weekly meetings) 2. Wed 3.00 UTC 3. Fri 8.00 UTC Each slot is one hour. Assumingly, #1 would be suitable for people in Europe and America. #2 for people in Asia and America. And #3 for people living in Europe and Asia. At least that was my thinking when I was wondering what the time slots should be. 
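To sanity-check the proposal, here is a tiny helper that prints the slots in a
few example time zones (the zone list is only illustrative, and any
tz-database binding such as pytz will do):

    from datetime import datetime

    import pytz  # assumed available in the environment

    SLOTS_UTC = [('Mon', 16), ('Wed', 3), ('Fri', 8)]
    ZONES = ['Europe/Berlin', 'America/Chicago', 'Asia/Novosibirsk']

    for day, hour in SLOTS_UTC:
        slot = pytz.utc.localize(datetime(2018, 2, 5, hour, 0))  # any date works
        local_times = ['%s %s' % (zone,
                                  slot.astimezone(pytz.timezone(zone)).strftime('%H:%M'))
                       for zone in ZONES]
        print('%s %02d:00 UTC -> %s' % (day, hour, ', '.join(local_times)))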
Please share your thoughts on this. The idea itself and whether the time slots look ok. Thanks Renat Akhmerov @Nokia -------------- next part -------------- An HTML attachment was scrubbed... URL: From aj at suse.com Mon Feb 5 07:58:37 2018 From: aj at suse.com (Andreas Jaeger) Date: Mon, 5 Feb 2018 08:58:37 +0100 Subject: [openstack-dev] [all][infra] Automatically generated Zuul changes (topic: zuulv3-projects) In-Reply-To: <87shalgb8x.fsf@meyer.lemoncheese.net> References: <87shalgb8x.fsf@meyer.lemoncheese.net> Message-ID: Please accept these changes so that they don't have to be created for the stable/queens branch, Andreas On 2018-01-31 18:59, James E. Blair wrote: > Hi, > > Occasionally we will make changes to the Zuul configuration language. > Usually these changes will be backwards compatible, but whether they are > or not, we still want to move things forward. > > Because Zuul's configuration is now spread across many repositories, it > may take many changes to do this. I'm in the process of making one such > change now. > > Zuul no longer requires the project name in the "project:" stanza for > in-repo configuration. Removing it makes it easier to fork or rename a > project. > > I am using a script to create and upload these changes. Because changes > to Zuul's configuration use more resources, I, and the rest of the infra > team, are carefully monitoring this and pacing changes so as not to > overwhelm the system. This is a limitation we'd like to address in the > future, but we have to live with now. > > So if you see such a change to your project (the topic will be > "zuulv3-projects"), please observe the following: > > * Go ahead and approve it as soon as possible. > > * Don't be strict about backported change ids. These changes are only > to Zuul config files, the stable backport policy was not intended to > apply to things like this. > > * Don't create your own versions of these changes. My script will > eventually upload changes to all affected project-branches. It's > intentionally a slow process, and attempting to speed it up won't > help. But if there's something wrong with the change I propose, feel > free to push an update to correct it. -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From akekane at redhat.com Mon Feb 5 08:56:46 2018 From: akekane at redhat.com (Abhishek Kekane) Date: Mon, 5 Feb 2018 14:26:46 +0530 Subject: [openstack-dev] [glance] FFE request for --check feature In-Reply-To: References: Message-ID: Sorry, Forgot to add meeting logs link in previous mail. Here it is; http://eavesdrop.openstack.org/meetings/glance/2018/glance.2018-02-01-14.01.log.html#l-164 Thank you, Abhishek Kekane On Mon, Feb 5, 2018 at 12:30 PM, Abhishek Kekane wrote: > We have discussed this in glance weekly meeting [1] and most of the core > reviewers are inclined towards accepting this FFE. > > +1 from my side as this --check command will be very helpful for operators. > > Thank you Bhagyashri for working on this. 
> > Abhishek Kekane > > On Wed, Jan 31, 2018 at 7:29 PM, Shewale, Bhagyashri < > Bhagyashri.Shewale at nttdata.com> wrote: > >> Hi Glance Folks, >> >> I'm requesting an Feature Freeze Exception for the lite-spec >> http://specs.openstack.org/openstack/glance-specs/specs/unta >> rgeted/glance/lite-spec-db-sync-check.html >> which is implemented by https://review.openstack.org/#/c/455837/8/ >> >> Regards, >> Bhagyashri Shewale >> >> ______________________________________________________________________ >> Disclaimer: This email and any attachments are sent in strictest >> confidence >> for the sole use of the addressee and may contain legally privileged, >> confidential, and proprietary data. If you are not the intended recipient, >> please advise the sender by replying promptly to this email and then >> delete >> and destroy this email and any attachments without any further use, >> copying >> or forwarding. >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Mon Feb 5 09:07:31 2018 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 5 Feb 2018 10:07:31 +0100 Subject: [openstack-dev] [oslo] PTL candidacy In-Reply-To: References: Message-ID: <64926206-12cc-99d6-5234-db165c3f199a@openstack.org> Ben Nemec wrote: > I am submitting my candidacy for Oslo PTL. Thanks Ben for stepping up ! -- Thierry From paul.bourke at oracle.com Mon Feb 5 10:12:19 2018 From: paul.bourke at oracle.com (Paul Bourke) Date: Mon, 5 Feb 2018 10:12:19 +0000 Subject: [openstack-dev] [kolla] Rocky PTL candidacy Message-ID: <3f76509b-39fb-fa2c-1d29-0d82478fc788@oracle.com> Hello all, I've been involved with Kolla since it's early stages around Liberty, where we saw it evolve through multiple iterations of image formats, orchestration methods and patterns into the project we know and love. From my perspective the community is one of the best things about Kolla. You are the ones that keep config files up to date as OpenStack evolves, continue to implement new roles and images, keep the gates up and running, the list goes on. With this mind, I won't list a bunch of features that I'd like to accomplish for Rocky. Rather, I would like to spend time listening to what you as users would like to see in the project, and doing whatever I possibly can to help you achieve that. This does not mean I don't have a vision for Kolla - no project is perfect, and there are plenty of areas I think could use some refinement. My hope is through discussion and collaboration we can continue to iterate to ensure this project is as useful as possible to our users. I hope you will consider me to serve you as your PTL for the coming cycle. Thank you! -Paul From strigazi at gmail.com Mon Feb 5 10:23:04 2018 From: strigazi at gmail.com (Spyros Trigazis) Date: Mon, 5 Feb 2018 11:23:04 +0100 Subject: [openstack-dev] [magnum] New meeting time Tue 1000UTC Message-ID: Hello, Heads up, the containers team meeting has changed from 1600UTC to 1000UTC. See you there tomorrow at #openstack-meeting-alt ! Spyros -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From witold.bedyk at est.fujitsu.com Mon Feb 5 10:24:02 2018 From: witold.bedyk at est.fujitsu.com (Bedyk, Witold) Date: Mon, 5 Feb 2018 10:24:02 +0000 Subject: [openstack-dev] [monasca] PTL candidacy Message-ID: Hello everyone, I would like to announce my candidacy to continue as PTL of Monasca for the Rocky release. I have worked for the project as core reviewer since 2015, acted as Release Management Liaison in Ocata and Pike and had a privilege of being PTL in Queens release cycle. I have learnt a lot in this new role and it's a real pleasure to work with the great team and improve the project. Thank you for all your support. In the next release I would like to focus on following topics: * continue the work on Cassandra support * strengthen the community and improve active participation and contribution * improve tenant monitoring * accomplish Python 3 migration Apart from that I'll do my best to promote Monasca, coordinate community work and interact with other OpenStack teams. Thank you for considering my candidacy and I'm looking forward to another very productive cycle. Best greetings Witek From berendt at betacloud-solutions.de Mon Feb 5 10:25:15 2018 From: berendt at betacloud-solutions.de (Christian Berendt) Date: Mon, 5 Feb 2018 11:25:15 +0100 Subject: [openstack-dev] [kolla] Rocky PTL candidacy In-Reply-To: <3f76509b-39fb-fa2c-1d29-0d82478fc788@oracle.com> References: <3f76509b-39fb-fa2c-1d29-0d82478fc788@oracle.com> Message-ID: Hello Paul. > On 5. Feb 2018, at 11:12, Paul Bourke wrote: > > This does not mean I don't have a vision for Kolla - no project is perfect Regardless of that, I would be interested in your visions. What specifically do you want to tackle in the next cycle in kolla? What should be the focus? Christian. -- Christian Berendt Chief Executive Officer (CEO) Mail: berendt at betacloud-solutions.de Web: https://www.betacloud-solutions.de Betacloud Solutions GmbH Teckstrasse 62 / 70190 Stuttgart / Deutschland Geschäftsführer: Christian Berendt Unternehmenssitz: Stuttgart Amtsgericht: Stuttgart, HRB 756139 From duonghq at vn.fujitsu.com Mon Feb 5 10:32:11 2018 From: duonghq at vn.fujitsu.com (duonghq at vn.fujitsu.com) Date: Mon, 5 Feb 2018 10:32:11 +0000 Subject: [openstack-dev] [kolla] [ptl] Rocky PTL candidacy Message-ID: Hello everybody, Kolla is already in production-grade state for a deployment system. In my country, I helped one media company using Kolla to deploy OpenStack cluster in production and I'll have chance to help another company in Vietnam using Kolla in production in the near future. I joined Kolla team from Newton, and I always remember how much help I got from Kolla PTL, core-reviewer and all other members. I'm serving as core reviewer from Pike cycle. I have contributed many blueprints, bugs report and fix from the first days I joined Kolla team [1][2][3][4]. >From the first day, I am impressed by the diversity of Kolla team, so we get many ideas for new feature, bug fix and code review. For Rocky cycle, I would like to focus on the following goals: * Focus on feedback from Kolla users, their needs and also hassle. * Improve Kolla documentation, keep it update with the code. * Encourage diversity in our community. * Improve cross community communication. * Implement upgrade procedure for OpenStack services [5] * Reduce upgrade time to zero downtime upgrade for OpenStack service. * Start fast forward upgrade support (the 7th point in [6]) * Bring upgrade test to our CI and improve existed facets. 
* Implement nodes change feature for Kolla (start with remove node feature). * Bring Kolla-kubernetes to 1.0 release. Last but not least, I want to introduce Kolla to many users, companies, encourage core reviewer membership, prioritize pending features and many other activities as PTL responsibilities. Thank you for reading this long email and please consider it as my PTL candidacy. And I hope you give me one chance to serve as your PTL for the Rocky cycle. [1] https://blueprints.launchpad.net/kolla [2] https://blueprints.launchpad.net/kolla-ansible [3] https://bugs.launchpad.net/kolla/ [4] https://bugs.launchpad.net/kolla-ansible/ [5] https://blueprints.launchpad.net/kolla-ansible/+spec/apply-service-upgrade-procedure [6] http://lists.openstack.org/pipermail/openstack-dev/2017-December/125688.html Best regards, Ha Quang Duong (Mr.) PODC - Fujitsu Vietnam Ltd. From lhinds at redhat.com Mon Feb 5 10:45:25 2018 From: lhinds at redhat.com (Luke Hinds) Date: Mon, 5 Feb 2018 10:45:25 +0000 Subject: [openstack-dev] [ptg] Dublin PTG proposed track schedule In-Reply-To: References: <27e2401b-9753-51b9-f465-eeb0281dc350@openstack.org> Message-ID: On Tue, Jan 30, 2018 at 2:11 PM, Thierry Carrez wrote: > Thierry Carrez wrote: > > Here is the proposed pre-allocated track schedule for the Dublin PTG: > > > > https://docs.google.com/spreadsheets/d/e/2PACX-1vRmqAAQZA1rIzlNJpVp-X60- > z6jMn_95BKWtf0csGT9LkDharY-mppI25KjiuRasmK413MxXcoSU7ki/ > pubhtml?gid=1374855307&single=true > > Following feedback I made small adjustments to Kuryr and > OpenStack-Charms allocations. The track schedule is about to be > published on the event website, so now is your last chance to signal > critical issues with it! > > -- > Thierry Carrez (ttx) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Hi Thierry, I had been monitoring for PTG room allocations, but I missed this email which was the important one. The security SIG plans to meet at the PTG to discuss several topics. I am to late to get our inclusion? Luke -------------- next part -------------- An HTML attachment was scrubbed... URL: From paul.bourke at oracle.com Mon Feb 5 11:04:32 2018 From: paul.bourke at oracle.com (Paul Bourke) Date: Mon, 5 Feb 2018 11:04:32 +0000 Subject: [openstack-dev] [kolla] Rocky PTL candidacy In-Reply-To: References: <3f76509b-39fb-fa2c-1d29-0d82478fc788@oracle.com> Message-ID: <49127d98-1898-aa13-9340-297d591f2391@oracle.com> On 05/02/18 10:25, Christian Berendt wrote: > Hello Paul. > >> On 5. Feb 2018, at 11:12, Paul Bourke wrote: >> >> This does not mean I don't have a vision for Kolla - no project is perfect > > > Regardless of that, I would be interested in your visions. What specifically do you want to tackle in the next cycle in kolla? What should be the focus? > > Christian. > Hi Christian, Sure thing :) To sum it up I would like to see us focus on Kolla in production environments. This is the mission of the project, and we still have ways to go. Specifically: * Improving our tooling. Currently kolla-ansible (as in the shell script) is very simplistic and requires operators to go under to the hood for basic things such as checking the health of their cloud, diffing configs, viewing logs, etc. [0] * Related to the above, we need improved monitoring in Kolla. 
* Finish the zero downtime upgrade work. * Resolving issues around configuration [1]. We need to decide how much we want to provide and make it as straight forward as possible for operators to override. * Documentation should continue to be a priority. * Finally, I would like to start the discussion of moving each element of Kolla out into separate projects. In particular I think this needs to happen with kolla-kubernetes but potentially the images also. Each of these are areas that I've specifically heard from real world operators, and also I think are key to the future and overall health of the project. If you'd like to discuss any in more detail please give me a shout at any time. -Paul [0] https://blueprints.launchpad.net/kolla/+spec/kolla-multicloud-cli [1] http://lists.openstack.org/pipermail/openstack-dev/2018-January/126663.html From honjo.rikimaru at po.ntt-tx.co.jp Mon Feb 5 11:54:20 2018 From: honjo.rikimaru at po.ntt-tx.co.jp (Rikimaru Honjo) Date: Mon, 5 Feb 2018 20:54:20 +0900 Subject: [openstack-dev] [oslo][oslo.log]Re: Error will be occurred if watch_log_file option is true In-Reply-To: References: <1515074711-sup-5593@lrrr.local> <165d1214-d0af-b634-6a29-c3e3afe52797@po.ntt-tx.co.jp> <1515514211-sup-4244@lrrr.local> Message-ID: <496c2576-395e-c5b1-28d9-1de5c4289f7e@po.ntt-tx.co.jp> I tried to replace pyinotify to inotify, but same error was occurred. I'm asking about the behavior of inotify to the developer of inotify. I wrote the detail of my status on Launchpad: https://bugs.launchpad.net/masakari/+bug/1740111/comments/4 On 2018/01/31 20:03, Rikimaru Honjo wrote: > Hello, > > Sorry for the very late reply... > > On 2018/01/10 1:11, Doug Hellmann wrote: >> Excerpts from Rikimaru Honjo's message of 2018-01-09 18:11:09 +0900: >>> Hello, >>> >>> On 2018/01/04 23:12, Doug Hellmann wrote: >>>> Excerpts from Rikimaru Honjo's message of 2018-01-04 18:22:26 +0900: >>>>> Hello, >>>>> >>>>> The below bug was reported in Masakari's Launchpad. >>>>> I think that this bug was caused by oslo.log. >>>>> (And, the root cause is a bug of pyinotify using by oslo.log. The detail is >>>>> written in the bug report.) >>>>> >>>>> * masakari-api failed to launch due to setting of watch_log_file and log_file >>>>>      https://bugs.launchpad.net/masakari/+bug/1740111 >>>>> >>>>> There is a possibility that this bug will affects all openstack components using oslo.log. >>>>> (But, the processes working with uwsgi[1] wasn't affected when I tried to reproduce. >>>>> I haven't solved the reason of this yet...) >>>>> >>>>> Could you help us? >>>>> And, what should we do...? >>>>> >>>>> [1] >>>>> e.g. nova-api, cinder-api, keystone... >>>>> >>>>> Best regards, >>>> >>>> The bug is in pyinotify. According to the git repo [1] that project >>>> was last updated in June of 2015.  I recommend we move off of >>>> pyinotify entirely, since it appears to be unmaintained. >>>> >>>> If there is another library to do the same thing we should switch >>>> to it (there seem to be lots of options [2]). If there is no viable >>>> replacement or fork, we should deprecate that log watching feature >>>> (and anything else for which we use pyinotify) and remove it ASAP. >>>> >>>> We'll need a volunteer to do the evaluation and update oslo.log. >>>> >>>> Doug >>>> >>>> [1] https://github.com/seb-m/pyinotify >>>> [2] https://pypi.python.org/pypi?%3Aaction=search&term=inotify&submit=search >>> Thank you for replying. >>> >>> I haven't deeply researched, but inotify looks good. 
>>> Because "weight" of inotify is the largest, and following text is described. >>> >>> https://pypi.python.org/pypi/inotify/0.2.9 >>>> This project is unrelated to the *PyInotify* project that existed prior to this one (this project began in 2015). That project is defunct and no longer available. >>> PyInotify is defunct and no longer available... >>> >> >> The inotify package seems like a good candidate to replace pyinotify. >> >> Have you looked at how hard it would be to change oslo.log? If so, does >> using the newer library eliminate the bug you had? > I am researching it now. (But, I think it is not easy.) > I'll create a patch if inotify can eliminate the bug. > > >> Doug >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > -- _/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/ Rikimaru Honjo E-mail:honjo.rikimaru at po.ntt-tx.co.jp From aschultz at redhat.com Mon Feb 5 12:29:19 2018 From: aschultz at redhat.com (Alex Schultz) Date: Mon, 5 Feb 2018 05:29:19 -0700 Subject: [openstack-dev] [tripleo] Rocky PTL candidacy Message-ID: I would like to nominate myself for the TripleO PTL role for the Rocky cycle. As PTL of TripleO for the Queens cycle, the focus was on improving containerized services, improving the deployment process and CI, and improving visibility of the status of the project. I personally believe over the last cycle we've made great strides on all these fronts. For Rocky, I would like to continue to focus on: * Reducing duplication and tech debt When we switched over to containerization, we've had to implement some items in multiple places to support backwards compatibility. I believe it's time to spend some efforts to reduce duplication of code and processes and focus on simplifying actions for the end user. An example of this will be efforts to align the undercloud and overcloud deployment processes. * Simplifying the deployment process Additionally with the containerization switch, we've added new requirements for actions that must be performed by the end user to deploy OpenStack. I believe we should spend time looking at what actions we can remove or reduce by automating them as part of the deployment process. An example of this will be efforts to enable autodiscovery for the nodes on the undercloud as well as switching to the config-download by default. * Continued efforts around CI We've made great strides in stablizing the CI as well as implementing zuul v3. We need to continue to move our CI into fully native zuul v3 actions and focus on developers ability to reproduce CI outside of the upstream. Thanks, Alex Schultz irc: mwhahaha From saverio.proto at switch.ch Mon Feb 5 13:44:06 2018 From: saverio.proto at switch.ch (Saverio Proto) Date: Mon, 5 Feb 2018 14:44:06 +0100 Subject: [openstack-dev] [horizon] collectstatic with custom theme is broken at least since Ocata Message-ID: Hello, I have tried to find a fix to this: https://ask.openstack.org/en/question/107544/ocata-theme-customization-with-templates/ https://bugs.launchpad.net/horizon/+bug/1744239 https://review.openstack.org/#/c/536039/ I am upgrading from Newton to Pike. Here the real question is: how is it possible that this bug was found so late ??? There is at least another operator that documented hitting this bug in Ocata. 
Probably this bug went unnoticed because you hit it only if you have customizations for Horizon. All the automatic testing does not notice this bug. What I cannot undestand is. - are we two operators hitting a corner case ? - No one else uses Horizon with custom themes in production with version newer than Newton ? This is all food for your brainstorming about LTS,bugfix branches, release cycle changes.... Cheers, Saverio -- SWITCH Saverio Proto, Peta Solutions Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland phone +41 44 268 15 15, direct +41 44 268 1573 saverio.proto at switch.ch, http://www.switch.ch http://www.switch.ch/stories From mateusz.kowalski at cern.ch Mon Feb 5 13:54:51 2018 From: mateusz.kowalski at cern.ch (Mateusz Kowalski) Date: Mon, 5 Feb 2018 13:54:51 +0000 Subject: [openstack-dev] [horizon] collectstatic with custom theme is broken at least since Ocata In-Reply-To: References: Message-ID: <38631337-B70D-488D-A9E5-F59693EEE942@cern.ch> Hi, We are running Horizon in Pike and cannot confirm having the same problem as you describe. We are using a custom theme however the folder structure is a bit different than the one you presented in the bug report. In our case we have - /usr/share/openstack-dashboard/openstack_dashboard/themes |-- cern |-- default |-- material what means we do not modify at all files inside "default". Let me know if you want to compare more deeply our changes to see where the problem comes from, as for us "theme_file.split('/templates/')" does not cause the trouble. Cheers, Mateusz > On 5 Feb 2018, at 14:44, Saverio Proto wrote: > > Hello, > > I have tried to find a fix to this: > > https://ask.openstack.org/en/question/107544/ocata-theme-customization-with-templates/ > https://bugs.launchpad.net/horizon/+bug/1744239 > https://review.openstack.org/#/c/536039/ > > I am upgrading from Newton to Pike. > > Here the real question is: how is it possible that this bug was found so > late ??? > > There is at least another operator that documented hitting this bug in > Ocata. > > Probably this bug went unnoticed because you hit it only if you have > customizations for Horizon. All the automatic testing does not notice > this bug. > > What I cannot undestand is. > - are we two operators hitting a corner case ? > - No one else uses Horizon with custom themes in production with > version newer than Newton ? > > This is all food for your brainstorming about LTS,bugfix branches, > release cycle changes.... > > Cheers, > > Saverio > > > -- > SWITCH > Saverio Proto, Peta Solutions > Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland > phone +41 44 268 15 15, direct +41 44 268 1573 > saverio.proto at switch.ch, http://www.switch.ch > > http://www.switch.ch/stories > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From dtantsur at redhat.com Mon Feb 5 14:15:18 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Mon, 5 Feb 2018 15:15:18 +0100 Subject: [openstack-dev] [requirements][release] FFE for sushy bug-fix release Message-ID: <477e36e2-5624-f139-04b4-d816b6b52636@redhat.com> Hi all, I'm requesting an exception to proceed with the release of the sushy library. To my best knowledge, the library is only consumed by ironic and at least one other vendor support library which is outside of the official governance. 
The release request is [1]. It addresses a last minute bug in the authentication code, without it authentication will not work in some cases. Thanks, Dmitry [1] https://review.openstack.org/540824 P.S. We really need a feature freeze period for libraries to avoid this.. But it cannot be introduced with the current library release freeze. Another PTG topic? :) From rosmaita.fossdev at gmail.com Mon Feb 5 14:21:32 2018 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Mon, 5 Feb 2018 09:21:32 -0500 Subject: [openstack-dev] [glance] FFE request for --check feature In-Reply-To: References: Message-ID: Thanks for following up on this, Abhishek. After our discussion approving this at the weekly meeting, I completely forgot to send out an update. As Abhishek indicated, the discussion was positive, and this FFE is APPROVED. cheers, brian On Mon, Feb 5, 2018 at 3:56 AM, Abhishek Kekane wrote: > Sorry, Forgot to add meeting logs link in previous mail. > > Here it is; > http://eavesdrop.openstack.org/meetings/glance/2018/glance.2018-02-01-14.01.log.html#l-164 > > Thank you, > > Abhishek Kekane > > On Mon, Feb 5, 2018 at 12:30 PM, Abhishek Kekane wrote: >> >> We have discussed this in glance weekly meeting [1] and most of the core >> reviewers are inclined towards accepting this FFE. >> >> +1 from my side as this --check command will be very helpful for >> operators. >> >> Thank you Bhagyashri for working on this. >> >> Abhishek Kekane >> >> On Wed, Jan 31, 2018 at 7:29 PM, Shewale, Bhagyashri >> wrote: >>> >>> Hi Glance Folks, >>> >>> I'm requesting an Feature Freeze Exception for the lite-spec >>> http://specs.openstack.org/openstack/glance-specs/specs/untargeted/glance/lite-spec-db-sync-check.html >>> which is implemented by https://review.openstack.org/#/c/455837/8/ >>> >>> Regards, >>> Bhagyashri Shewale >>> >>> ______________________________________________________________________ >>> Disclaimer: This email and any attachments are sent in strictest >>> confidence >>> for the sole use of the addressee and may contain legally privileged, >>> confidential, and proprietary data. If you are not the intended >>> recipient, >>> please advise the sender by replying promptly to this email and then >>> delete >>> and destroy this email and any attachments without any further use, >>> copying >>> or forwarding. >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From prometheanfire at gentoo.org Mon Feb 5 14:26:09 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Mon, 5 Feb 2018 08:26:09 -0600 Subject: [openstack-dev] [requirements][release] FFE for sushy bug-fix release In-Reply-To: <477e36e2-5624-f139-04b4-d816b6b52636@redhat.com> References: <477e36e2-5624-f139-04b4-d816b6b52636@redhat.com> Message-ID: <20180205142609.4sfsuommr5qx2h6g@gentoo.org> On 18-02-05 15:15:18, Dmitry Tantsur wrote: > Hi all, > > I'm requesting an exception to proceed with the release of the sushy > library. 
To my best knowledge, the library is only consumed by ironic and at > least one other vendor support library which is outside of the official > governance. The release request is [1]. It addresses a last minute bug in > the authentication code, without it authentication will not work in some > cases. > > Thanks, > Dmitry > > [1] https://review.openstack.org/540824 > > P.S. > We really need a feature freeze period for libraries to avoid this.. But it > cannot be introduced with the current library release freeze. Another PTG > topic? :) > As discussed on IRC you have my ack -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From debacker.fred at gmail.com Mon Feb 5 14:29:30 2018 From: debacker.fred at gmail.com (Fred De Backer) Date: Mon, 5 Feb 2018 15:29:30 +0100 Subject: [openstack-dev] [api] Openstack API and HTTP caching Message-ID: Hi there, I recently hit an issue where I was using Terraform through an HTTP proxy (enforced by my company IT) to provision some resources in an Openstack cloud. Since creating the resources took some time, the initial response from openstack was "still creating...". Further polling of the resource status resulted in receiving *cached* copies of "still creating..." from the proxy until time-out. RFC7234 that describes HTTP caching states that in absence of all headers describing the lifetime/validity of the response, heuristic algorithms may be applied by caches to guesstimate an appropriate value for the validity of the response... (Who knows what is implemented out there...) See: the HTTP caching RFC section 4.2.2 . The API responses describe the current state of an object which isn't permanent, but has a limited validity. In fact very limited as the state of an object might change any moment. Therefore it is my opinion that the Openstack API (Nova in this case, but equally valid for all other APIs) should be responsible to include proper HTTP headers in their responses to either disallow caching of the response or at least limit it's validity. See the HTTP caching RFC section 5 for headers that could be used to accomplish that. For sake of completeness; also see https://github.com/gophercloud /gophercloud/issues/727 for my initial client-side fix and related discussion with client-side project owners... Regards, Fred -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Mon Feb 5 14:46:32 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Mon, 5 Feb 2018 08:46:32 -0600 Subject: [openstack-dev] [CI][Keystone][Requirements][Release] What happened to the gate on Feb 4th? In-Reply-To: <20180205044412.GA3974@qiming-ThinkCentre-M58p> References: <20180205044412.GA3974@qiming-ThinkCentre-M58p> Message-ID: On 02/04/2018 10:44 PM, Qiming Teng wrote: > Starting about 24 hours ago, we have been notified CI gate failure > although we haven't changed anything to our project locally. Before that > we have spent quite some time making the out-of-tree tempest plugins > work on gate. > > After checking the log again and again ... we found the following logs > from Keystone: > > Feb 05 03:31:12.609492 ubuntu-xenial-ovh-gra1-0002362092 > devstack at keystone.service[24845]: WARNING keystone.common.wsgi [None > req-dfcbf106-fbf5-41bd-9012-3c65d1de5f9a None admin] Could not find > project: service.: ProjectNotFound: Could not find project: service. 
> > Feb 05 03:31:13.845694 ubuntu-xenial-ovh-gra1-0002362092 > devstack at keystone.service[24845]: WARNING keystone.common.wsgi [None > req-50feed46-7c15-425d-bec7-1b4a7ccf6859 None admin] Could not find > service: clustering.: ServiceNotFound: Could not find service: > clustering. > > Feb 05 03:31:12.552647 ubuntu-xenial-ovh-gra1-0002362092 > devstack at keystone.service[24845]: WARNING keystone.common.wsgi [None > req-0a5e660f-dad6-4779-aea4-dd6969c728e6 None admin] Could not find > domain: Default.: DomainNotFound: Could not find domain: Default. > > Feb 05 03:31:12.441128 ubuntu-xenial-ovh-gra1-0002362092 > devstack at keystone.service[24845]: WARNING keystone.common.wsgi [None > req-7eb9ed90-28fc-40aa-8a41-d560f7a156c9 None admin] Could not find > user: senlin.: UserNotFound: Could not find user: senlin. > > Feb 05 03:31:12.336572 ubuntu-xenial-ovh-gra1-0002362092 > devstack at keystone.service[24845]: WARNING keystone.common.wsgi [None > req-19e52d02-5471-49a2-8acd-360199d8c6e0 None admin] Could not find > role: admin.: RoleNotFound: Could not find role: admin. > > Feb 05 03:28:33.797665 ubuntu-xenial-ovh-gra1-0002362092 > devstack at keystone.service[24845]: WARNING keystone.common.wsgi [None > req-544cd822-18a4-4f7b-913d-297716418239 None admin] Could not find > user: glance.: UserNotFound: Could not find user: glance. > > Feb 05 03:28:29.993214 ubuntu-xenial-ovh-gra1-0002362092 > devstack at keystone.service[24845]: WARNING py.warnings [None > req-dc411d9c-6ab9-44e3-9afb-20e5e7034f12 None admin] > /usr/local/lib/python2.7/dist-packages/oslo_policy/policy.py:865: > UserWarning: Policy identity:create_endpoint failed scope check. The > token used to make the request was project scoped but the policy > requires ['system'] scope. This behavior may change in the future where > using the intended scope is required > > Feb 05 03:28:29.920892 ubuntu-xenial-ovh-gra1-0002362092 > devstack at keystone.service[24845]: WARNING keystone.common.wsgi [None > req-32a4a378-d6d3-411e-9842-2178e577af27 None admin] Could not find > service: compute.: ServiceNotFound: Could not find service: compute. These are all WARNINGS messages. If this is a tempest run, these are probably from negative testing [0], in which case keystone is doing the correct thing. The warnings you've pasted are also present in successful tempest runs [1]. Can you provide a link to a patch that's failing? What project do you work on? [0] https://github.com/openstack/tempest/blob/master/tempest/api/identity/admin/v3/test_projects_negative.py [1] http://logs.openstack.org/57/540557/2/check/tempest-full/bbd7cdd/controller/logs/screen-keystone.txt.gz?level=WARNING > > .... > > ---------------------- > > So I'm wondering what the hack happened? Keystone version bump? > Devstack changed? Tempest settings changed? > Why are we merging these changes near the end of a cycle when people are > focusing on stabilizing things? The original feature freeze date was 10 days ago [2] and with the condition of the gate during that time, there were several projects trailing with merging feature. Keystone was one of them and we issued feature freeze exceptions for those efforts [3] [4] [5]. Based on the warnings you've reported, I'm not convinced any of those efforts are affecting CI in a negative way, especially since we're still getting support into tempest to test those features. 
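To illustrate why such warnings show up even in passing runs, here is a
minimal sketch of the kind of negative identity test behind them; the class
and test names are made up, and the real tests live in [0] above:

    from tempest.api.identity import base
    from tempest.lib.common.utils import data_utils
    from tempest.lib import exceptions as lib_exc


    class ProjectsNegativeSketch(base.BaseIdentityV3AdminTest):

        def test_show_nonexistent_project(self):
            # Asking keystone for a project ID that does not exist must return
            # 404; keystone logs the failed lookup as a WARNING even though
            # the test (and the whole run) passes.
            self.assertRaises(lib_exc.NotFound,
                              self.projects_client.show_project,
                              data_utils.rand_uuid())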
[2] https://releases.openstack.org/queens/schedule.html#q-keystone-ffreeze [3] http://lists.openstack.org/pipermail/openstack-dev/2018-January/126587.html [4] http://lists.openstack.org/pipermail/openstack-dev/2018-January/126588.html [5] http://lists.openstack.org/pipermail/openstack-dev/2018-January/126589.html > Any hints on these are highly appreciated. > > - Qiming > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From gang.sungjin at gmail.com Mon Feb 5 15:05:01 2018 From: gang.sungjin at gmail.com (SungJin Kang) Date: Tue, 6 Feb 2018 00:05:01 +0900 Subject: [openstack-dev] [OpenStack-I18n] [I18n][PTL] PTL nomination for I18n In-Reply-To: <489844e70f170a35b658e860671ada5d@arcor.de> References: <489844e70f170a35b658e860671ada5d@arcor.de> Message-ID: +1 lol 2018-01-31 4:07 GMT+09:00 Frank Kloeker : > This is my announcement for re-candidacy as I18n PTL in Rocky Cycle. > > The time from the last cycle passed very fast. I had to manage all the > things that a PTL expects. But we documented everything very well and I > always had the full support of the team. I asked the team and it would > continue to support me, which is why I take the chance again. > This is the point to say thank you to all that we have achieved many > things and we are a great community! > > Now it's time to finish things: > > 1. Zanata upgrade. We are in the middle of the upgrade process. The dev > server is sucessfull upgraded and the new Zanata versions fits all our > requirements to automate things more and more. > Now we are in the hot release phase and when it's over, the live > upgrade can start. > > 2. Translation check site. A little bit out of scope in Queens release > because of lack of resources. We'll try this again in Rocky. > > 3. Aquire more people to the team. That will be the main part of my work > as PTL in Rocky. We've won 3 new language teams in the last cycle and > can Openstack serve in Indian, Turkish and Esperanto. There is even more > potential for strengthening existing teams or creating new ones. > For this we have great OpenStack events in Europe this year, at least > the Fall Summit in Berlin. We plan workshops and presentations. > > The work of the translation team is also becoming more colorful. We have > project documentation translation in the order books, translation user > survey and white papers for working groups. > > We are well prepared, but we also look to the future, for example how > AI-programming can support us in the translation work. > > If the plan suits you, I look forward to your vote. > > Frank > > Email: eumel at arcor.de > IRC: eumel8 > Twitter: eumel_8 > > OpenStack Profile: > https://www.openstack.org/community/members/profile/45058/frank-kloeker > > _______________________________________________ > OpenStack-I18n mailing list > OpenStack-I18n at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-i18n > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From thierry at openstack.org Mon Feb 5 15:07:03 2018 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 5 Feb 2018 16:07:03 +0100 Subject: [openstack-dev] [ptg] Dublin PTG proposed track schedule In-Reply-To: References: <27e2401b-9753-51b9-f465-eeb0281dc350@openstack.org> Message-ID: <34c23477-5508-417c-9193-26399a506f11@openstack.org> Luke Hinds wrote: > I had been monitoring for PTG room allocations, but I missed this email > which was the important one. > > The security SIG plans to meet at the PTG to discuss several topics. I > am to late to get our inclusion? Not too late, but obviously less choice... Would you be interested in a full day on Monday ? What room size do you need ? -- Thierry Carrez (ttx) From lbragstad at gmail.com Mon Feb 5 15:15:28 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Mon, 5 Feb 2018 09:15:28 -0600 Subject: [openstack-dev] [ptg] Dublin PTG proposed track schedule In-Reply-To: <34c23477-5508-417c-9193-26399a506f11@openstack.org> References: <27e2401b-9753-51b9-f465-eeb0281dc350@openstack.org> <34c23477-5508-417c-9193-26399a506f11@openstack.org> Message-ID: Colleen started a thread asking if there was a need for a baremetal/vm group session [0], which generated quite a bit of positive response. Is there still a possibility of fitting that in on either Monday or Tuesday? The group is usually pretty large. [0] http://lists.openstack.org/pipermail/openstack-dev/2018-January/126743.html On 02/05/2018 09:07 AM, Thierry Carrez wrote: > Luke Hinds wrote: >> I had been monitoring for PTG room allocations, but I missed this email >> which was the important one. >> >> The security SIG plans to meet at the PTG to discuss several topics. I >> am to late to get our inclusion? > Not too late, but obviously less choice... Would you be interested in a > full day on Monday ? What room size do you need ? > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From lhinds at redhat.com Mon Feb 5 15:16:25 2018 From: lhinds at redhat.com (Luke Hinds) Date: Mon, 5 Feb 2018 15:16:25 +0000 Subject: [openstack-dev] [ptg] Dublin PTG proposed track schedule In-Reply-To: <34c23477-5508-417c-9193-26399a506f11@openstack.org> References: <27e2401b-9753-51b9-f465-eeb0281dc350@openstack.org> <34c23477-5508-417c-9193-26399a506f11@openstack.org> Message-ID: On Mon, Feb 5, 2018 at 3:07 PM, Thierry Carrez wrote: > Luke Hinds wrote: > > I had been monitoring for PTG room allocations, but I missed this email > > which was the important one. > > > > The security SIG plans to meet at the PTG to discuss several topics. I > > am to late to get our inclusion? > > Not too late, but obviously less choice... Would you be interested in a > full day on Monday ? What room size do you need ? > > -- > Thierry Carrez (ttx) > A full day would be great, and room does not need to be large - I expect between 5 to 10. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dougal at redhat.com Mon Feb 5 15:23:59 2018 From: dougal at redhat.com (Dougal Matthews) Date: Mon, 5 Feb 2018 15:23:59 +0000 Subject: [openstack-dev] [mistral] Proposing time slots for Mistral office hours In-Reply-To: <9580a64c-095b-49dd-a117-8f4e4a200022@Spark> References: <9580a64c-095b-49dd-a117-8f4e4a200022@Spark> Message-ID: On 5 February 2018 at 07:48, Renat Akhmerov wrote: > Hi, > > Not so long ago we decided to stop holding weekly meetings in one of the > general IRC channel (it was #openstack-meeting-3 for the last several > months). The main reason was that we usually didn’t have a good > representation of the team there because the team is distributed across the > world. We tried to find a time slot several times that would work well for > all the team members but failed to. Another reason is that we didn’t always > have a clear reason to gather because everyone was just focused on their > tasks and a discussion wasn’t much needed so a meeting was even a > distraction. > > However, despite all this we still would like channels to communicate, the > team members and people who have user questions and/or would like to start > contributing. > > Similarly to other teams in OpenStack we’d like to try the “Office hours” > concept. If we follow it we’re supposed to have team members, for whom the > time slot is OK, available in our channel #openstack-mistral during certain > hours. These hours can be used for discussing our development stuff between > team members from different time zones and people outside the team would > know when they can come and talk to us. > > Just to start the discussion on what the office hours time slots could be > I’m proposing the following time slots: > > 1. Mon 16.00 UTC (it used to be our time of weekly meetings) > 2. Wed 3.00 UTC > 3. Fri 8.00 UTC > > These sounds good to me. I should be able to regularly attend the Monday and Friday slots. I think we should ask Mistral cores to try and attend at least one of these a week. > > > Each slot is one hour. > > Assumingly, #1 would be suitable for people in Europe and America. #2 for > people in Asia and America. And #3 for people living in Europe and Asia. At > least that was my thinking when I was wondering what the time slots should > be. > > Please share your thoughts on this. The idea itself and whether the time > slots look ok. > > Thanks > > Renat Akhmerov > @Nokia > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmsimard at redhat.com Mon Feb 5 15:31:38 2018 From: dmsimard at redhat.com (David Moreau Simard) Date: Mon, 5 Feb 2018 10:31:38 -0500 Subject: [openstack-dev] [all][kolla][rdo] Collaboration with Kolla for the RDO test days In-Reply-To: References: Message-ID: Hi everyone, We've started planning the deployment with the Kolla team, you can see the etherpad from the "operator" perspective here: https://etherpad.openstack.org/p/kolla-rdo-m3 We'll advertise the test days and how users can participate soon. Thanks, David Moreau Simard Senior Software Engineer | OpenStack RDO dmsimard = [irc, github, twitter] On Mon, Jan 29, 2018 at 8:29 AM, David Moreau Simard wrote: > Hi ! 
> > For those who might be unfamiliar with the RDO [1] community project: > we hang out in #rdo, we don't bite and we build vanilla OpenStack > packages. > > These packages are what allows you to leverage one of the deployment > projects such as TripleO, PackStack or Kolla to deploy on CentOS or > RHEL. > The RDO community collaborates with these deployment projects by > providing trunk and stable packages in order to let them develop and > test against the latest and the greatest of OpenStack. > > RDO test days typically happen around a week after an upstream > milestone has been reached [2]. > The purpose is to get everyone together in #rdo: developers, users, > operators, maintainers -- and test not just RDO but OpenStack itself > as installed by the different deployment projects. > > We tried something new at our last test day [3] and it worked out great. > Instead of encouraging participants to install their own cloud for > testing things, we supplied a cloud of our own... a bit like a limited > duration TryStack [4]. > This lets users without the operational knowledge, time or hardware to > install an OpenStack environment to see what's coming in the upcoming > release of OpenStack and get the feedback loop going ahead of the > release. > > We used Packstack for the last deployment and invited Packstack cores > to deploy, operate and troubleshoot the installation for the duration > of the test days. > The idea is to rotate between the different deployment projects to > give every interested project a chance to participate. > > Last week, we reached out to Kolla to see if they would be interested > in participating in our next RDO test days [5] around February 8th. > We supply the bare metal hardware and their core contributors get to > deploy and operate a cloud with real users and developers poking > around. > All around, this is a great opportunity to get feedback for RDO, Kolla > and OpenStack. > > We'll be advertising the event a bit more as the test days draw closer > but until then, I thought it was worthwhile to share some context for > this new thing we're doing. > > Let me know if you have any questions ! > > Thanks, > > [1]: https://www.rdoproject.org/ > [2]: https://www.rdoproject.org/testday/ > [3]: https://dmsimard.com/2017/11/29/come-try-a-real-openstack-queens-deployment/ > [4]: http://trystack.org/ > [5]: http://eavesdrop.openstack.org/meetings/kolla/2018/kolla.2018-01-24-16.00.log.html > > David Moreau Simard > Senior Software Engineer | OpenStack RDO > > dmsimard = [irc, github, twitter] From thierry at openstack.org Mon Feb 5 15:32:07 2018 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 5 Feb 2018 16:32:07 +0100 Subject: [openstack-dev] [ptg] Dublin PTG proposed track schedule In-Reply-To: References: <27e2401b-9753-51b9-f465-eeb0281dc350@openstack.org> <34c23477-5508-417c-9193-26399a506f11@openstack.org> Message-ID: <3cea864b-c7a9-5e7a-5ad5-ddb1b5d1771e@openstack.org> Luke Hinds wrote: > On Mon, Feb 5, 2018 at 3:07 PM, Thierry Carrez > wrote: > > Luke Hinds wrote: > > I had been monitoring for PTG room allocations, but I missed this email > > which was the important one. > > > > The security SIG plans to meet at the PTG to discuss several topics. I > > am to late to get our inclusion? > > Not too late, but obviously less choice... Would you be interested in a > full day on Monday ? What room size do you need ? > > -- > Thierry Carrez (ttx) > > > A full day would be great, and room does not need to be large - I expect > between 5 to 10. OK, done! 
-- Thierry Carrez (ttx) From balazs.gibizer at ericsson.com Mon Feb 5 15:32:03 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Mon, 5 Feb 2018 16:32:03 +0100 Subject: [openstack-dev] [nova] Notification update week 6 Message-ID: <1517844723.7728.13@smtp.office365.com> Hi, Here is the status update / focus settings mail for w6. Bugs ---- No new bugs and the below bug status is the same as last week. [High] https://bugs.launchpad.net/nova/+bug/1737201 TypeError when sending notification during attach_interface Fix merged to master. Backports have been proposed: * Pike: https://review.openstack.org/#/c/531745/ * Queens: https://review.openstack.org/#/c/531746/ [High] https://bugs.launchpad.net/nova/+bug/1739325 Server operations fail to complete with versioned notifications if payload contains unset non-nullable fields We need to understand first how this can happen. Based on the comments from the bug it seems it happens after upgrading an old deployment. So it might be some problem with the online data migration that moves the flavor into the instance. [Low] https://bugs.launchpad.net/nova/+bug/1487038 nova.exception._cleanse_dict should use oslo_utils.strutils._SANITIZE_KEYS Old abandoned patches exist but need somebody to pick them up: * https://review.openstack.org/#/c/215308/ * https://review.openstack.org/#/c/388345/ Versioned notification transformation ------------------------------------- The rocky bp has been created https://blueprints.launchpad.net/nova/+spec/versioned-notification-transformation-rocky Every open patch needs to be reproposed to this bp as soon as master opens for Rocky. Introduce instance.lock and instance.unlock notifications --------------------------------------------------------- A specless bp has been proposed to the Rocky cycle https://blueprints.launchpad.net/nova/+spec/trigger-notifications-when-lock-unlock-instances Some preliminary discussion happened in an earlier patch https://review.openstack.org/#/c/526251/ Add the user id and project id of the user initiated the instance action to the notification ----------------------------------------------------------------- A new bp has been proposed https://blueprints.launchpad.net/nova/+spec/add-action-initiator-to-instance-action-notifications As the user who initiates the instance action (e.g. reboot) could be different from the user owning the instance it would make sense to include the user_id and project_id of the action initiatior to the versioned instance action notifications as well. Factor out duplicated notification sample ----------------------------------------- https://review.openstack.org/#/q/topic:refactor-notification-samples+status:open No open patches. We can expect some as soon as master opens for Rocky. Weekly meeting -------------- The next meeting will be held on 6th of February on #openstack-meeting-4 https://www.timeanddate.com/worldclock/fixedtime.html?iso=20180206T170000 Cheers, gibi From thierry at openstack.org Mon Feb 5 15:34:59 2018 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 5 Feb 2018 16:34:59 +0100 Subject: [openstack-dev] [ptg] Dublin PTG proposed track schedule In-Reply-To: References: <27e2401b-9753-51b9-f465-eeb0281dc350@openstack.org> <34c23477-5508-417c-9193-26399a506f11@openstack.org> Message-ID: <349343ee-a79b-f6ba-50c0-dda08ec2aba1@openstack.org> Lance Bragstad wrote: > Colleen started a thread asking if there was a need for a baremetal/vm > group session [0], which generated quite a bit of positive response. 
Is > there still a possibility of fitting that in on either Monday or > Tuesday? The group is usually pretty large. > > [0] > http://lists.openstack.org/pipermail/openstack-dev/2018-January/126743.html Yes, we can still allocate a 80-people room or a 30-people one. Let me know if you prefer Monday, Tuesday or both. -- Thierry Carrez (ttx) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: OpenPGP digital signature URL: From cdent+os at anticdent.org Mon Feb 5 15:36:46 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Mon, 5 Feb 2018 15:36:46 +0000 (GMT) Subject: [openstack-dev] [api] Openstack API and HTTP caching In-Reply-To: References: Message-ID: On Mon, 5 Feb 2018, Fred De Backer wrote: > Therefore it is my opinion that the Openstack API (Nova in this case, but > equally valid for all other APIs) should be responsible to include proper > HTTP headers in their responses to either disallow caching of the response > or at least limit it's validity. Yeah, that is what should happen. We recently did it (disallow caching) for placement (http://specs.openstack.org/openstack/nova-specs/specs/queens/approved/placement-cache-headers.html) but it probably needs to be done just about everywhere else. I'd suggest you create a bug (probably just a nova one for now, but make it general enough that it is easy to add other projects) an perhaps that will help get some traction. -- Chris Dent (⊙_⊙') https://anticdent.org/ freenode: cdent tw: @anticdent From dtantsur at redhat.com Mon Feb 5 15:47:42 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Mon, 5 Feb 2018 16:47:42 +0100 Subject: [openstack-dev] [ironic] driver composition: help needed from vendors Message-ID: Hi everyone, We have landed changes deprecating classic drivers, and we may remove classic drivers as early as end of Rocky. I would like to ask those who maintain drivers for ironic a few favors: 1. We have landed a database migration [1] to change nodes from classic drivers to hardware types automatically. Please check the mapping [2] for your drivers for correctness. 2. Please update your documentation pages to primarily use hardware types. You're free to still mention classic drivers or remove the information about them completely. 3. Please update your CI to use hardware types on master (queens and newer). Please make sure that the coverage does not suffer. For example, if you used to test pxe_foo and agent_foo, the updates CI should test "foo" hardware type with "iscsi" and "direct" deploy interfaces. Please let us know if you have any concerns. Thanks, Dmitry [1] https://review.openstack.org/534373 [2] https://review.openstack.org/539589 From coolsvap at gmail.com Mon Feb 5 15:56:16 2018 From: coolsvap at gmail.com (Swapnil Kulkarni) Date: Mon, 5 Feb 2018 21:26:16 +0530 Subject: [openstack-dev] [all][kolla][rdo] Collaboration with Kolla for the RDO test days In-Reply-To: References: Message-ID: Hi David, Count me in. ~coolsvap On Mon, Feb 5, 2018 at 9:01 PM, David Moreau Simard wrote: > Hi everyone, > > We've started planning the deployment with the Kolla team, you can see > the etherpad from the "operator" perspective here: > https://etherpad.openstack.org/p/kolla-rdo-m3 > > We'll advertise the test days and how users can participate soon. 
> > Thanks, > > > David Moreau Simard > Senior Software Engineer | OpenStack RDO > > dmsimard = [irc, github, twitter] > > > On Mon, Jan 29, 2018 at 8:29 AM, David Moreau Simard > wrote: > > Hi ! > > > > For those who might be unfamiliar with the RDO [1] community project: > > we hang out in #rdo, we don't bite and we build vanilla OpenStack > > packages. > > > > These packages are what allows you to leverage one of the deployment > > projects such as TripleO, PackStack or Kolla to deploy on CentOS or > > RHEL. > > The RDO community collaborates with these deployment projects by > > providing trunk and stable packages in order to let them develop and > > test against the latest and the greatest of OpenStack. > > > > RDO test days typically happen around a week after an upstream > > milestone has been reached [2]. > > The purpose is to get everyone together in #rdo: developers, users, > > operators, maintainers -- and test not just RDO but OpenStack itself > > as installed by the different deployment projects. > > > > We tried something new at our last test day [3] and it worked out great. > > Instead of encouraging participants to install their own cloud for > > testing things, we supplied a cloud of our own... a bit like a limited > > duration TryStack [4]. > > This lets users without the operational knowledge, time or hardware to > > install an OpenStack environment to see what's coming in the upcoming > > release of OpenStack and get the feedback loop going ahead of the > > release. > > > > We used Packstack for the last deployment and invited Packstack cores > > to deploy, operate and troubleshoot the installation for the duration > > of the test days. > > The idea is to rotate between the different deployment projects to > > give every interested project a chance to participate. > > > > Last week, we reached out to Kolla to see if they would be interested > > in participating in our next RDO test days [5] around February 8th. > > We supply the bare metal hardware and their core contributors get to > > deploy and operate a cloud with real users and developers poking > > around. > > All around, this is a great opportunity to get feedback for RDO, Kolla > > and OpenStack. > > > > We'll be advertising the event a bit more as the test days draw closer > > but until then, I thought it was worthwhile to share some context for > > this new thing we're doing. > > > > Let me know if you have any questions ! > > > > Thanks, > > > > [1]: https://www.rdoproject.org/ > > [2]: https://www.rdoproject.org/testday/ > > [3]: https://dmsimard.com/2017/11/29/come-try-a-real-openstack- > queens-deployment/ > > [4]: http://trystack.org/ > > [5]: http://eavesdrop.openstack.org/meetings/kolla/2018/kolla. > 2018-01-24-16.00.log.html > > > > David Moreau Simard > > Senior Software Engineer | OpenStack RDO > > > > dmsimard = [irc, github, twitter] > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rosmaita.fossdev at gmail.com Mon Feb 5 16:25:14 2018 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Mon, 5 Feb 2018 11:25:14 -0500 Subject: [openstack-dev] [glance] glance-manage db check feature needs reviews Message-ID: Hello Glancers, Please take a look at Bhagyashri's patch, which was given a FFE. There's a slight deviation from the spec, so I need feedback about whether this is acceptable (spoiler alert: I think it's OK). So please comment on that aspect of the patch even if you don't have time at the moment to review the code thoroughly. See my comment on PS11 for details. thanks, brian From lbragstad at gmail.com Mon Feb 5 16:38:28 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Mon, 5 Feb 2018 10:38:28 -0600 Subject: [openstack-dev] [ptg] Dublin PTG proposed track schedule In-Reply-To: <349343ee-a79b-f6ba-50c0-dda08ec2aba1@openstack.org> References: <27e2401b-9753-51b9-f465-eeb0281dc350@openstack.org> <34c23477-5508-417c-9193-26399a506f11@openstack.org> <349343ee-a79b-f6ba-50c0-dda08ec2aba1@openstack.org> Message-ID: <1627c084-b57d-ae35-3649-fa35979ebe8d@gmail.com> On 02/05/2018 09:34 AM, Thierry Carrez wrote: > Lance Bragstad wrote: >> Colleen started a thread asking if there was a need for a baremetal/vm >> group session [0], which generated quite a bit of positive response. Is >> there still a possibility of fitting that in on either Monday or >> Tuesday? The group is usually pretty large. >> >> [0] >> http://lists.openstack.org/pipermail/openstack-dev/2018-January/126743.html > Yes, we can still allocate a 80-people room or a 30-people one. Let me > know if you prefer Monday, Tuesday or both. Awesome - we're collecting topics in an etherpad, but we're likely only going to get to three or four of them [0] [1]. We can work those topics into two sessions. One on Monday and one on Tuesday, just to break things up in case other things are happening those days that people want to get to. [0] https://etherpad.openstack.org/p/baremetal-vm-rocky-ptg [1] http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2018-02-05.log.html#t2018-02-05T15:45:57 > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From jlibosva at redhat.com Mon Feb 5 17:41:05 2018 From: jlibosva at redhat.com (Jakub Libosvar) Date: Mon, 5 Feb 2018 18:41:05 +0100 Subject: [openstack-dev] [Neutron] Bug deputy report Message-ID: <04c8d32a-c1bc-11c0-6e18-f77d07318c19@redhat.com> Hi all, I was a bug deputy for the last week and I won't be attending today team meeting, so here comes my report: It was very calm, there were no critical bugs reported, some bugs were already fixed and other got attention and have patches up for review. Some bugs were also triaged and some closed as they were duplicates. The only one left is https://bugs.launchpad.net/neutron/+bug/1746707 where I'm not sure whether that's valid for reference implementation. It says there are inconsistency issues in NSX Neutron plugin and hence there *might* be issues in other plugins too. 
AFAIK ml2 has BEFORE_ and AFTER_ callbacks in combination with retry mechanisms performed over database. But I'm not brave to judge whether this is sufficient to be considered safe. Hence I marked the bug as incomplete but I think it deserves some discussion at the meeting. Thanks, Kuba From juliaashleykreger at gmail.com Mon Feb 5 18:12:00 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Mon, 5 Feb 2018 10:12:00 -0800 Subject: [openstack-dev] [ironic] Nominating Hironori Shiina for ironic-core Message-ID: I would like to nominate Hironori Shiina to ironic-core. He has been working in the ironic community for some time, and has been helping over the past several cycles with more complex features. He has demonstrated an understanding of Ironic's code base, mechanics, and overall community style. His review statistics are also extremely solid. I personally have a great deal of trust in his reviews. I believe he would make a great addition to our team. Thanks, -Julia From ramamani.yeleswarapu at intel.com Mon Feb 5 18:49:58 2018 From: ramamani.yeleswarapu at intel.com (Yeleswarapu, Ramamani) Date: Mon, 5 Feb 2018 18:49:58 +0000 Subject: [openstack-dev] [ironic] this week's priorities and subteam reports Message-ID: Hi, We are glad to present this week's priorities and subteam report for Ironic. As usual, this is pulled directly from the Ironic whiteboard[0] and formatted. This Week's Priorities (as of the weekly ironic meeting) ======================================================== - Fix the multitenant grenade - Fix the ironic-tempest-plugin CI https://review.openstack.org/#/c/540355/ - CI and docs work for classic drivers deprecation (see status below) - Ansible deploy docs https://review.openstack.org/#/c/525501/ - Fix as many bugs as possible Bugs that we want to land in this release: 1. ironic - Don't try to lock upfront for vif removal: https://review.openstack.org/#/c/534441/ 2. handle glance images without data https://review.openstack.org/531180 3. rework exception handling on deploy https://review.openstack.org/531120 4. n-g-s: fix bind_port error https://review.openstack.org/#/c/540295/ Vendor priorities ----------------- cisco-ucs: Patches in works for SDK update, but not posted yet, currently rebuilding third party CI infra after a disaster... idrac: RFE and first several patches for adding UEFI support will be posted by Tuesday, 1/9 ilo: https://review.openstack.org/#/c/530838/ - OOB Raid spec for iLO5 irmc: None oneview: Subproject priorities --------------------- bifrost: (TheJulia): Fedora support fixes - https://review.openstack.org/#/c/471750/ ironic-inspector (or its client): networking-baremetal: networking-generic-switch: - initial release note https://review.openstack.org/#/c/534201/ sushy and the redfish driver: Bugs (dtantsur, vdrok, TheJulia) -------------------------------- - Stats (diff between 15 Jan 2018 and 5 Feb 2018) - Ironic: 222 bugs (+6) + 247 wishlist items (-13). 1 new, 161 in progress (+5), 1 critical (+1), 34 high (+1) and 25 incomplete (-2) - Inspector: 14 bugs + 25 wishlist items (-3). 0 new, 12 in progress (+2), 0 critical, 2 high and 4 incomplete (-2) - Nova bugs with Ironic tag: 14 (+1). 
1 new, 0 critical, 0 high - via http://dashboard-ironic.7e14.starter-us-west-2.openshiftapps.com/ - the dashboard was abruptly deleted and needs a new home :( - use it locally with `tox -erun` if you need to - HIGH bugs with patches to review: - Clean steps are not tested in gate https://bugs.launchpad.net/ironic/+bug/1523640: Add manual clean step ironic standalone test https://review.openstack.org/#/c/429770/15 - Needs to be reproposed to the ironic tempest plugin repository. - prepare_instance() is not called for whole disk images with 'agent' deploy interface https://bugs.launchpad.net/ironic/+bug/1713916: - Fix ``agent`` deploy interface to call ``boot.prepare_instance`` https://review.openstack.org/#/c/499050/ - (TheJulia) Currently WF-1, as revision is required for deprecation. - If provisioning network is changed, Ironic conductor does not behave correctly https://bugs.launchpad.net/ironic/+bug/1679260: Ironic conductor works correctly on changes of networks: https://review.openstack.org/#/c/462931/ - (rloo) needs some direction - may be fixed as part of https://review.openstack.org/#/c/460564/ CI refactoring and missing test coverage ---------------------------------------- - not considered a priority, it's a 'do it always' thing - Standalone CI tests (vsaienk0) - next patch to be reviewed, needed for 3rd party CI: https://review.openstack.org/#/c/429770/ - localboot with partitioned image patches: - Ironic - add localboot partitioned image test: https://review.openstack.org/#/c/502886/ - when previous are merged TODO (vsaienko) - Upload tinycore partitioned image to tarbals.openstack.org - Switch ironic to use tinyipa partitioned image by default - Missing test coverage (all) - portgroups and attach/detach tempest tests: https://review.openstack.org/382476 - adoption: https://review.openstack.org/#/c/344975/ - should probably be changed to use standalone tests - root device hints: TODO - node take over - resource classes integration tests: https://review.openstack.org/#/c/443628/ - radosgw (https://bugs.launchpad.net/ironic/+bug/1737957) Essential Priorities ==================== Ironic client API version negotiation (TheJulia, dtantsur) ---------------------------------------------------------- - RFE https://bugs.launchpad.net/python-ironicclient/+bug/1671145 - Nova bug https://bugs.launchpad.net/nova/+bug/1739440 - gerrit topic: https://review.openstack.org/#/q/topic:bug/1671145 - status as of 5 Feb 2017: - TODO: - API-SIG guideline on consuming versions in SDKs https://review.openstack.org/532814 on review - establish foundation for using version negotiation in nova - nothing more for Queens. Stay tuned... - need to make sure that we discuss/agree with nova about how to do this Classic drivers deprecation (dtantsur) -------------------------------------- - spec: http://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/classic-drivers-future.html - status as of 05 Feb 2017: - dev documentation for hardware types: https://review.openstack.org/537959 - switch documentation to hardware types: - install and admin guides done - need help from vendors updating their pages! - api-ref examples: TODO - migration of classic drivers to hardware types: done - migration of CI to hardware types - ironic and inspector: done - inspector part caused problems in ironic-tempest-plugin CI :( - IPA: TODO - ironic-lib: TODO? - python-ironicclient: TODO? - python-ironic-inspector-client: TODO? - virtualbmc: TODO? 
- bifrost: https://review.openstack.org/#/c/540153/ proposed, CI presently broken for bifrost. - actual deprecation: done Traits support planning (mgoddard, johnthetubaguy, dtantsur) ------------------------------------------------------------ - status as of 5 Feb 2018: - deploy templates spec: https://review.openstack.org/504952 needs reviews - depends on deploy-steps spec: https://review.openstack.org/#/c/412523 - traits API: - http://specs.openstack.org/openstack/ironic-specs/specs/approved/node-traits.html - ironic, ironicclient & nova patches have landed. - This is DONE Reference architecture guide (dtantsur, sambetts) ------------------------------------------------- - status as of 05 Feb 2017: - dtantsur is returning to this after the release - list of cases from the PTG - Admin-only provisioner - small and/or rare: TODO - non-HA acceptable, noop/flat network acceptable - large and/or frequent: TODO - HA required, neutron network or noop (static) network - Bare metal cloud for end users - smaller single-site: TODO - non-HA, ironic conductors on controllers and noop/flat network acceptable - larger single-site: TODO - HA, split out ironic conductors, neutron networking, virtual media > iPXE > PXE/TFTP - split out TFTP servers if you need them? - larger multi-site: TODO - cells v2 - ditto as single-site otherwise? High Priorities =============== Neutron event processing (vdrok, vsaienk0, sambetts) ---------------------------------------------------- - status as of 27 Sep 2017: - spec at https://review.openstack.org/343684, ready for reviews, replies from authors - WIP code at https://review.openstack.org/440778 Routed network support (sambetts, vsaienk0, bfournie, hjensas) -------------------------------------------------------------- - status as of 5 Feb 2018: - Need reviews ... https://review.openstack.org/#/q/topic:bug/1658964+status:open - hjensas taken over as main contributor from sambetts - There is challenges with integration to Placement due to the way the integration was done in neutron. Neutron will create a resource provider for network segments in Placement, then it creates an os-aggregate in Nova for the segment, adds nova compute hosts to this aggregate. Ironic nodes cannot be added to host-aggregates. I (hjensas) had a short discussion with neutron devs (mlavalle) on the issue: http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/%23openstack-neutron.2018-01-12.log.html#t2018-01-12T17:05:38 There are patches in Nova to add support for ironic nodes in host-aggregates: - https://review.openstack.org/#/c/526753/ allow compute nodes to be associated with host agg - https://review.openstack.org/#/c/529135/ (Spec) - Patches: - https://review.openstack.org/456235 Add baremetal neutron agent (Merged) - https://review.openstack.org/#/c/533707/ start_flag = True, only first time, or conf change (Merged) - https://review.openstack.org/521838 Switch from MechanismDriver to SimpleAgentMechanismDriverBase MERGED - https://review.openstack.org/#/c/536040/ Flat networks use node.uuid when binding ports. 
MERGED - https://review.openstack.org/#/c/537353 Add documentation for baremetal mech MERGED - https://review.openstack.org/#/c/532349/ Add support to bind type vlan networks MERGED - https://review.openstack.org/524709 Make the agent distributed using hashring and notifications - CI Patches: - https://review.openstack.org/#/c/531275/ Devstack - use neutron segments (routed provider networks) MERGED - https://review.openstack.org/#/c/531637/ Wait for ironic-neutron-agent to report state MERGED - https://review.openstack.org/#/c/530117/ Devstack - Add ironic-neutron-agent MERGED - https://review.openstack.org/#/c/530409/ Add dsvm job MERGED - https://review.openstack.org/#/c/392959/ Rework Ironic devstack baremetal network simulation Rescue mode (rloo, stendulker, aparnav) --------------------------------------- - Status as on 5 Feb 2018 - spec: http://specs.openstack.org/openstack/ironic-specs/specs/approved/implement-rescue-mode.html - code: https://review.openstack.org/#/q/topic:bug/1526449+status:open+OR+status:merged - ironic side: - all code patches have merged except for - Add documentation for rescue mode: https://review.openstack.org/#/c/431622/ - Devstack changes to enable testing add support for rescue mode: https://review.openstack.org/#/c/524118/ - We need to be careful with this, in that we can't use python-ironicclient changes that have not been released. - Update "standalone" job for supporting rescue mode: https://review.openstack.org/#/c/537821/ - Rescue mode standalone tests: https://review.openstack.org/#/c/538119/ (failing CI, not ready for reviews) - Can't Merge until we do a client release with rescue support (in Rocky): - Tempest tests with nova: https://review.openstack.org/#/c/528699/ - Run the tempest test on the CI: https://review.openstack.org/#/c/528704/ - succeeded in rescuing: http://logs.openstack.org/04/528704/16/check/ironic-tempest-dsvm-ipa-wholedisk-bios-agent_ipmitool-tinyipa/4b74169/logs/screen-ir-cond.txt.gz#_Feb_02_09_44_12_940007 - nova side: - https://blueprints.launchpad.net/nova/+spec/ironic-rescue-mode: - approved for Queens but didn't get the ironic code (client) done in time - (TheJulia) Nova has indicated that this is deferred until Rocky. - To get the nova patch merged, we need: - release new python-ironicclient - update ironicclient version in upper-constraints (this patch will be posted automatically) - update ironicclient version in global-requirement (this patch needs to be posted manually) - code patch: https://review.openstack.org/#/c/416487/ - CI is needed for nova part to land - tiendc is working for CI Clean up deploy interfaces (vdrok) ---------------------------------- - status as of 5 Feb 2017: - patch https://review.openstack.org/524433 needs update and rebase Zuul v3 jobs in-tree (sambetts, derekh, jlvillal, rloo) ------------------------------------------------------- - etherpad tracking zuul v3 -> intree: https://etherpad.openstack.org/p/ironic-zuulv3-intree-tracking - cleaning up/centralizing job descriptions (eg 'irrelevant-files'): DONE - Next TODO is to convert jobs on master, to proper ansible. NOT a high priority though. 
- (pas-ha) DNM experimental patch with "devstack-tempest" as base job https://review.openstack.org/#/c/520167/ Graphical console interface (pas-ha, vdrok, rpioso) --------------------------------------------------- - status as of 8 Jan 2017: - spec on review: https://review.openstack.org/#/c/306074/ - there is nova part here, which has to be approved too - dtantsur is worried by absence of progress here - (TheJulia) I think for rocky, it might be worth making it a prime focus, or making it a background goal. BIOS config framework (dtantsur, yolanda, rpioso) ------------------------------------------------- - status as of 8 Jan 2017: - spec under active review: https://review.openstack.org/#/c/496481/ Ansible deploy interface (pas-ha) --------------------------------- - spec: http://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/ansible-deploy-driver.html - status as of 5 Feb 2017: - code merged, CI coverage via the standalone job - docs: https://review.openstack.org/#/c/525501/ OpenStack Priorities ==================== Python 3.5 compatibility (Nisha, Ankit) --------------------------------------- - Topic: https://review.openstack.org/#/q/topic:goal-python35+NOT+project:openstack/governance+NOT+project:openstack/releases - this include all projects, not only ironic - please tag all reviews with topic "goal-python35" - TODO submit the python3 job for IPA - for ironic and ironic-inspector job enabled by disabling swift as swift is still lacking py3.5 support. - anupn to update the python3 job to build tinyipa with python3 - (anupn): Talked with swift folks and there is a bug upstream opened https://review.openstack.org/#/c/401397 for py3 support in swift. But this is not on their priority - Right now patch pass all gate jobs except agent_- drivers. - updating setup.cfg (part of requirements for the goal): - ironic: https://review.openstack.org/#/c/539500/ - MERGED - ironic-inspector: https://review.openstack.org/#/c/539502/ - MERGED Deploying with Apache and WSGI in CI (pas-ha, vsaienk0) ------------------------------------------------------- - ironic is mostly finished - (pas-ha) needs to be rewritten for uWSGI, patches on review: - https://review.openstack.org/#/c/507067 - inspector is TODO and depends on https://review.openstack.org/#/q/topic:bug/1525218 - delayed as the HA work seems to take a different direction Split away the tempest plugin (jlvillal) ---------------------------------------- - https://etherpad.openstack.org/p/ironic-tempest-plugin-migration - Current (5-Feb-2018) (jlvillal): All projects now using tempest plugin code from openstack/ironic-tempest-plugin - removed plugin code from master branch of openstack/ironic and openstack/ironic-inspector - Plugin code will NOT be removed from the stable branches of openstack/ironic and openstack/ironic-inspector - ironic-tempest-plugin 1.0.0 released - (jlvillal) I believe it is done. - rloo declares it DONE then :) Subprojects =========== Inspector (dtantsur) -------------------- - trying to flip dsvm-discovery to use the new dnsmasq pxe filter and failing because of bash :Dhttps://review.openstack.org/#/c/525685/6/devstack/plugin.sh at 202 - follow-ups being merged/reviewed; working on state consistency enhancements https://review.openstack.org/#/c/510928/ too (HA demo follow-up) Bifrost (TheJulia) ------------------ - Also seems a recent authentication change in keystoneauth1 has broken processing of the clouds.yaml files, i.e. `openstack` command does not work. - TheJulia will try to look at this this week. 
Drivers: -------- Cisco UCS (sambetts) Last updated 2018/02/05 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - Cisco CIMC driver CI back up and working on every patch - Cisco UCSM driver CI in development - Patches for updating the UCS python SDKs are in the works and should be posted soon ......... Until next week, --Rama [0] https://etherpad.openstack.org/p/IronicWhiteBoard -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Mon Feb 5 19:03:24 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Mon, 5 Feb 2018 11:03:24 -0800 Subject: [openstack-dev] [ironic] Rocky PTL candidacy Message-ID: Hi Everybody! I am hereby announcing my candidacy and self nomination for the Rocky cycle ironic PTL position. I'm fairly certain most of you know me by this point and know how much I care about the community as well as our efforts to automate the deployment and configuration of baremetal infrastructure. For those of you who do not yet know me, I've been involved in OpenStack since the beginning of the Juno cycle, and have been working with the ironic community since the beginning of the Kilo cycle. I am very passionate about ironic, but I recognize that there is more work to be done, new directions to head in, and challenges to conquer. My vision is for ironic to be utilized in more use cases outside of what we have typically seen as our primary user. It is necessary to expand on existing relationships and to build new relationships going forward. My hope is for us to continue to grow as a community. While we have had set backs like all projects, we still have massive potential. Thank you for your consideration, Julia Kreger (TheJulia) From miguel at mlavalle.com Mon Feb 5 19:21:24 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Mon, 5 Feb 2018 13:21:24 -0600 Subject: [openstack-dev] [neutron] PTL candidacy for Rocky Message-ID: Hello OpenStack Community, I write this to submit my candidacy for the Neutron PTL position during the Rocky cycle. I had the privilege of being the project's PTL for most of the Queens release series and want to have another opportunity helping the team and the community to deliver more and better networking functionality. I have worked for the technology industry 37+ years. After many years in management, I decided to return to the "Light Side of the Force", the technical path, and during the San Diego Summit in 2012 told the Neutron (Quantum at the time) PTL that one day I wanted to be a member of the core team. He and the team welcomed me and that started the best period of my career, not only for the never ending learning experiences, but more importantly, for the many talented women and men that I have met along the way. Over these past few years I worked for Rackspace, helping them to deploy and operate Neutron in their public cloud, IBM in their Linux Technology Center, and currently for Huawei, as their Neutron upstream development lead. During the Queens release the team made significant progress in the following fronts: - Continued with the adoption of Oslo Versioned Objects in the DB layer - Implemented QoS rate limits for floating IPs - Delivered the FWaaS V2.0 API - Concluded the implementation of the logging API for security groups, which implements a way to capture and store events related to security groups. 
- Continued moving externally referenced items to neutron-lib and adopting them in Neutron and the Stadium projects - Welcomed VPNaaS back into the Stadium after the team put it back in shape - Improved team processes such as having a pre-defined weekly schedule for team members to act as bug triagers, gave W+ to additional core members in neutron-lib and re-scheduled the Neutron drivers meeting on alternate days and hours to enable attendance of more people across different time zones Some of the goals that I propose for the team to pursue during the Rocky cycle are: - Finish the implementation of multiple port binding to solve the migration between VIF types in a generic way so operators can switch easily between backends. This is a joint effort with the Nova team - Implement QoS minimum bandwidth allocation in the Placement API to support scheduling of instances based on the network bandwidth available in hosts. This is another joint effort with the Nova team - Synchronize the adoption of the DB layer engine facade with the adoption of Oslo Versioned Objects to avoid situations where they don't cooperate nicely - Implement port forwarding based on floating IPs - Continue moving externally referenced items to neutron-lib and adopting them in Neutron and the Stadium projects. Finish documenting extensions in the API reference. Start the move of generic DB functionality to the library - Expand the work done with the logging API in security groups to FWaaS v2.0 - Continue efforts in expanding our team and making its work easier. While we had some success during Queens, this is an area where we need to maintain our focus Thank you for your consideration and for taking the time to read this Miguel Lavalle (mlavalle) -------------- next part -------------- An HTML attachment was scrubbed... URL: From corvus at inaugust.com Mon Feb 5 19:51:04 2018 From: corvus at inaugust.com (James E. Blair) Date: Mon, 05 Feb 2018 11:51:04 -0800 Subject: [openstack-dev] [infra][all] New Zuul Depends-On syntax In-Reply-To: <87h8r7qpo9.fsf@meyer.lemoncheese.net> (James E. Blair's message of "Sat, 27 Jan 2018 07:36:38 -0800") References: <87efmfz05l.fsf@meyer.lemoncheese.net> <005844f0-ea22-3e0b-fd2e-c1c889d9293b@inaugust.com> <2b04ea61-2b2c-d740-37e7-47215b10e3a0@nemebean.com> <87zi51v5uu.fsf@meyer.lemoncheese.net> <87h8r7qpo9.fsf@meyer.lemoncheese.net> Message-ID: <87zi4n2p1z.fsf@meyer.lemoncheese.net> corvus at inaugust.com (James E. Blair) writes: > The reason is that, contrary to earlier replies in this thread, the > /#/c/ version of the change URL does not work. The /#/c/ form of Gerrit URLs should work now; if it doesn't, please let me know. I would still recommend (and personally plan to use) the other form -- it's very easy to end up with a URL in Gerrit which includes the patchset, or even a set of patchset diffs. Zuul will ignore this information and select the latest patchset of the change as its dependency. If a user clicks on a URL with an embedded patchset though, they may end up looking at an old version, and not the version that Zuul will use. At any rate, the /#/c/ form should work. I'd recommend trying to trim off anything past the change number, if you do use it, to avoid ambiguity. 
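For illustration (543210 is a made-up change number, not a real review), a cross-repo dependency in a commit message footer would then be written with the bare change URL:

    Depends-On: https://review.openstack.org/543210

The same change spelled as https://review.openstack.org/#/c/543210/ should now resolve as well, as long as nothing after the change number is left on the URL.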
-Jim From alee at redhat.com Mon Feb 5 20:13:31 2018 From: alee at redhat.com (Ade Lee) Date: Mon, 05 Feb 2018 15:13:31 -0500 Subject: [openstack-dev] [barbican] candidacy for PTL Message-ID: <1517861611.9647.57.camel@redhat.com> Fellow Barbicaneers, I'd like to nominate myself to serve as Barbican PTL through the Rocky cycle. Dave has done a great job at keeping the project growing and I'd like to continue his good work. This is an exciting time for Barbican. With more distributions and installers incorporating Barbican, and a renewed focus on meeting security and compliance requirements, deployers will be relying on Barbican to securely implement some of the use cases that we've been working on for the past few years (volume encryption, image signing, swift object encryption etc.). Moreover, work has been progressing in having castellan adopted as a base service for OpenStack applications - hopefully increasing the deployment of secure secret management across the board. In particular, for the Rocky cycle, I'd like to continue the progress made in Queens to: 1) Grow the Barbican team of contributors and core reviewers. 2) Help drive further collaboration with other Openstack projects with joint blueprints. 3) Help ensure that deployments are successful by keeping up on bugs fixes and backports. 4) Help develop new secret store plugins, in particular : -- a castallan secret store that will allow us to use vault and custodia backends. -- SGX? 5) Continue the stability and maturity enhancements. Thank you in advance for this opportunity to serve. --Ade Lee (alee) From lbragstad at gmail.com Mon Feb 5 20:18:52 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Mon, 5 Feb 2018 14:18:52 -0600 Subject: [openstack-dev] [keystone] [ptg] Rocky PTG planning In-Reply-To: References: Message-ID: <124908c5-f8ed-26a8-0a21-6af7ee36adb2@gmail.com> I've started working the topics we had into a rough schedule [0], and it's wide open for criticism and feedback. If you notice a conflict with another session, please leave a comment on the schedule or ping me. Also, if you think of something else that we should cover, we have several open slots and we can be flexible in shuffling things around. Thanks for taking a look! [0] https://etherpad.openstack.org/p/keystone-rocky-ptg On 01/03/2018 03:43 PM, Lance Bragstad wrote: > Hey all, > > It's about that time to start our pre-PTG planning activities. I've > started an etherpad and bootstrapped it with some basic content [0]. > Please take the opportunity to add topics to the schedule. It doesn't > matter if it is cross-project or keystone specific. The sooner we get > ideas flowing the easier it will be to coordinate cross-project tracks > with other groups. We'll organize the content into a schedule after a > couple week. Let me know if you have any questions. > > Thanks, > > Lance > > [0] https://etherpad.openstack.org/p/keystone-rocky-ptg > > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From lbragstad at gmail.com Mon Feb 5 20:22:47 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Mon, 5 Feb 2018 14:22:47 -0600 Subject: [openstack-dev] [keystone][nova][ironic][heat] Do we want a BM/VM room at the PTG? 
In-Reply-To: References: <50773bcf-ef48-c92c-4ebc-ef69cb658eb0@redhat.com> Message-ID: <6379debf-8181-48b4-3a47-0a7043adbd24@gmail.com> On 02/02/2018 11:56 AM, Lance Bragstad wrote: > I apologize for using the "baremetal/VM" name, but I wanted to get an > etherpad rolling sooner rather than later [0], since we're likely going > to have to decide on a new name in person. I ported the initial ideas > Colleen mentioned when she started this thread, added links to previous > etherpads from Boston and Denver, and ported some topics from the Boston > etherpads. > > Please feel free to add ideas to the list or elaborate on existing ones. > Next week we'll start working through them and figure out what we want > to accomplish for the session. Once we have an official room for the > discussion, I'll add the etherpad to the list in the wiki. Based on some discussions in #openstack-dev this morning [0], I took a stab at working out a rough schedule for Monday and Tuesday [1]. Let me know if you notice conflicts or want to re-propose a session/topic. [0] http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2018-02-05.log.html#t2018-02-05T15:45:57 [1] https://etherpad.openstack.org/p/baremetal-vm-rocky-ptg > > [0] https://etherpad.openstack.org/p/baremetal-vm-rocky-ptg > > > On 02/02/2018 11:10 AM, Zane Bitter wrote: >> On 30/01/18 10:33, Colleen Murphy wrote: >>> At the last PTG we had some time on Monday and Tuesday for >>> cross-project discussions related to baremetal and VM management. We >>> don't currently have that on the schedule for this PTG. There is still >>> some free time available that we can ask for[1]. Should we try to >>> schedule some time for this? >> +1, I would definitely attend this too. >> >> - ZB >> >>>  From a keystone perspective, some things we'd like to talk about with >>> the BM/VM teams are: >>> >>> - Unified limits[2]: we now have a basic REST API for registering >>> limits in keystone. Next steps are building out libraries that can >>> consume this API and calculate quota usage and limit allocation, and >>> developing models for quotas in project hierarchies. Input from other >>> projects is essential here. >>> - RBAC: we've introduced "system scope"[3] to fix the admin-ness >>> problem, and we'd like to guide other projects through the migration. >>> - Application credentials[4]: this main part of this work is largely >>> done, next steps are implementing better access control for it, which >>> is largely just a keystone team problem but we could also use this >>> time for feedback on the implementation so far >>> >>> There's likely some non-keystone-related things that might be at home >>> in a dedicated BM/VM room too. Do we want to have a dedicated day or >>> two for these projects? Or perhaps not dedicated days, but >>> planned-in-advance meeting time? Or should we wait and schedule it >>> ad-hoc if we feel like we need it? 
>>> >>> Colleen >>> >>> [1] >>> https://docs.google.com/spreadsheets/d/e/2PACX-1vRmqAAQZA1rIzlNJpVp-X60-z6jMn_95BKWtf0csGT9LkDharY-mppI25KjiuRasmK413MxXcoSU7ki/pubhtml?gid=1374855307&single=true >>> [2] >>> http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/limits-api.html >>> [3] >>> http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/system-scope.html >>> [4] >>> http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/application-credentials.html >>> >>> __________________________________________________________________________ >>> >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From vdrok at mirantis.com Mon Feb 5 20:31:57 2018 From: vdrok at mirantis.com (Vladyslav Drok) Date: Mon, 5 Feb 2018 12:31:57 -0800 Subject: [openstack-dev] [ironic] Nominating Hironori Shiina for ironic-core In-Reply-To: References: Message-ID: +1 On Mon, Feb 5, 2018 at 10:12 AM, Julia Kreger wrote: > I would like to nominate Hironori Shiina to ironic-core. He has been > working in the ironic community for some time, and has been helping > over the past several cycles with more complex features. He has > demonstrated an understanding of Ironic's code base, mechanics, and > overall community style. His review statistics are also extremely > solid. I personally have a great deal of trust in his reviews. > > I believe he would make a great addition to our team. > > Thanks, > > -Julia > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aschultz at redhat.com Mon Feb 5 21:11:23 2018 From: aschultz at redhat.com (Alex Schultz) Date: Mon, 5 Feb 2018 14:11:23 -0700 Subject: [openstack-dev] [infra][all] New Zuul Depends-On syntax In-Reply-To: <871si4czfe.fsf@meyer.lemoncheese.net> References: <87efmfz05l.fsf@meyer.lemoncheese.net> <005844f0-ea22-3e0b-fd2e-c1c889d9293b@inaugust.com> <2b04ea61-2b2c-d740-37e7-47215b10e3a0@nemebean.com> <87zi51v5uu.fsf@meyer.lemoncheese.net> <7bea8147-4d21-bbb3-7a28-a179a4a132af@redhat.com> <871si4czfe.fsf@meyer.lemoncheese.net> Message-ID: On Thu, Feb 1, 2018 at 11:55 AM, James E. Blair wrote: > Zane Bitter writes: > >> Yeah, it's definitely nice to have that flexibility. e.g. 
here is a >> patch that wouldn't merge for 3 months because the thing it was >> dependent on also got proposed as a backport: >> >> https://review.openstack.org/#/c/514761/1 >> >> From an OpenStack perspective, it would be nice if a Gerrit ID implied >> a change from the same Gerrit instance as the current repo and the >> same branch as the current patch if it exists (otherwise any branch), >> and we could optionally use a URL instead to select a particular >> change. > > Yeah, that's reasonable, and it is similar to things Zuul does in other > areas, but I think one of the thing we want to do with Depends-On is > consider that Zuul isn't the only audience. It's there just as much for > the reviewers, and other folks. So when it comes to Gerrit change ids, > I feel we had to constrain it to Gerrit's own behavior. When you click > on one of those in Gerrit, it shows you all of the changes across all of > the repos and branches with that change-id. So that result list is what > Zuul should work with. Otherwise there's a discontinuity between what a > user sees when they click the hyperlink under the change-id and what > Zuul does. > > Similarly, in the new system, you click the URL and you see what Zuul is > going to use. > > And that leads into the reason we want to drop the old syntax: to make > it seamless for a GitHub user to know how to Depends-On a Gerrit change, > and vice versa, with neither requiring domain-specific knowledge about > the system. > While I can appreciate that, having to manage urls for backports in commit messages will lead to missing patches and other PEBAC related problems. Perhaps rather than throwing out this functionality we can push for improvements in the gerrit interaction itself? I'm really -1 on removing the change-id syntax just for this reasoning. The UX of having to manage complex depends-on urls for things like backports makes switching to URLs a non-starter unless I have a bunch of external system deps (and I generally don't). Thanks, -Alex > -Jim > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From ruby.loo at intel.com Mon Feb 5 22:24:41 2018 From: ruby.loo at intel.com (Loo, Ruby) Date: Mon, 5 Feb 2018 22:24:41 +0000 Subject: [openstack-dev] [ironic] Nominating Hironori Shiina for ironic-core In-Reply-To: References: Message-ID: <1C95F8B0-5E3D-4466-A09F-D512B57FAF74@intel.com> +1 from me. He's been really helpful with the boot-from-volume and rescue work. Looking forward to Hironori joining us :) Thanks Julia, for bringing this up! --ruby On 2018-02-05, 1:12 PM, "Julia Kreger" wrote: I would like to nominate Hironori Shiina to ironic-core. He has been working in the ironic community for some time, and has been helping over the past several cycles with more complex features. He has demonstrated an understanding of Ironic's code base, mechanics, and overall community style. His review statistics are also extremely solid. I personally have a great deal of trust in his reviews. I believe he would make a great addition to our team. 
Thanks, -Julia __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From ruby.loo at intel.com Mon Feb 5 22:42:04 2018 From: ruby.loo at intel.com (Loo, Ruby) Date: Mon, 5 Feb 2018 22:42:04 +0000 Subject: [openstack-dev] [ironic] team dinner at Dublin PTG? Message-ID: <0363716E-BD26-4C72-900C-6B411B211C72@intel.com> Hi ironic-ers, Planning for the Dublin PTG has started. And what's the most important thing (and most fun event) to plan for? You got it, the team dinner! We'd like to get an idea of who is interested and what evening works for all or most of us. Please indicate which evenings you are available, at this doodle: https://doodle.com/poll/d4ff6m9hxg887n9q If you're shy or don't want to use doodle, send me an email. Please respond by Friday, Feb 16 (same deadline as PTG topics-for-discussion), so we can find a place and reserve it. Thanks! --ruby From jaypipes at gmail.com Tue Feb 6 00:33:20 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 5 Feb 2018 19:33:20 -0500 Subject: [openstack-dev] [all][Kingbird]Multi-Region Orchestrator In-Reply-To: References: <7c7191c1-6bb4-66e9-fbdf-699a9841a2bb@gmail.com> Message-ID: <29be24fb-80c4-621b-698e-e2b45f5fcb74@gmail.com> Goutham, comments inline... Also, FYI, using HTML email with different color fonts to indicate different people talking is not particularly mailing list-friendly. For reasons why, just check out your last post: http://lists.openstack.org/pipermail/openstack-dev/2018-January/126842.html You can't tell who is saying what in the mailing list post... Much better to use non-HTML email and demarcate responses with the traditional > marker. :) OK, comments inline below. On 01/31/2018 01:17 PM, Goutham Pratapa wrote: > Hi Jay, > > Thanks for the questions.. :) > > What precisely do you mean by "resources" above ?? > > Resources as-in resources required to boot-up a vm (Keypair, Image, > Flavors ) Gotcha. Thanks for the answer. > Also, by "syncing", do you mean "replicating"? The reason I ask is > because in the case of, say, VM "resources", you can't "sync" a VM > across regions. You can replicate its bootable image, but you can't > "sync" a VM's state across multiple OpenStack deployments. > > Yes as you said syncing as-in replicating only. Gotcha. You could, of course, actually use synchronous (or semi-sync) replication for various databases, including Glance and Keystone's identity/assignment information, but yes, async replication is just as good. > and yes we cannot sync vm's across regions but our idea is to > sync/replicate all the parameters required to boot a vm OK, sounds good. > (viz. *image, keypair, flavor*) which are originally there in the source > region to the target regions in a single-go. Gotcha. Some questions on scope that piqued my interest while reading your response... Is Kingbird predominantly designed to be the multi-region orchestrator for OpenStack deployments that are all owned/operated by the same deployer? Or does Kingbird have intentions of providing glue services between multiple fully-independent OpenStack deployments (possibly operated by different deployers)? Further, does Kingbird intend to get into the multi-cloud (as in AWS, OpenStack, Azure, etc) orchestration game? > I'm curious what you mean by "resource management". Could you elaborate > a bit on this? 
> > Resource management as-in managing the resources i.e say a user has a > glance image(*qcow2 or ami format*) or > say flavor(*works only if admin*) with some properties or keypair > present in one source regionand he wants the same image or > same flavor with same properties or the same keypair in another set of > regions user may have to recreate them in all target regions. > > But with the help of kingbird you can do all the operations in a single go. > > --> If user wants to sync a resource of type keypair he can replicate > the keypair into multiple target regions in single go (similarly glance > images and flavors ) > --> If user wants different type of resource( keypair,image and flavor) > in a single go then user can  give a yaml file as input and kingbird > replicates all resources in all target regions OK, I understand your use case here, thanks. It does seem to me, however, that if the intention is *not* to get into the multi-cloud orchestration game, that a simpler solution to this multi-region OpenStack deployment use case would be to simply have a global Glance and Keystone infrastructure that can seamlessly scale to multiple regions. That way, there'd be no need for replicating anything. I suppose what I'm recommending it that instead of the concept of a region (or availability zone in Nova for that matter) being a mostly-configuration option thing, that the OpenStack contributor community actually work to make regions (the concept that Keystone labels a region; which is just a grouping of service endpoints) the one and only concept of a user-facing "partition" throughout OpenStack. That way we would have OpenStack services like Glance, Nova, Cinder, Neutron, etc just *natively* understand which region they are in and how/if they can communicate with other regions. Sometimes it seems we (as a community) go through lots of hoops working around fundamental architectural problems in OpenStack instead of just fixing those problems to begin with. See: Nova cellsv1 (and some of cellsv2), Keystone federation, the lack of a real availability zone concept anywhere, Nova shelve/unshelve (partly developed because VMs and IPs were too closely coupled at the time), the list goes on and on... Anyway, mostly just rambling/ranting... just food for thought. Best, -jay > Thanks > Goutham. > > On Wed, Jan 31, 2018 at 9:25 PM, Jay Pipes > wrote: > > On 01/31/2018 01:49 AM, Goutham Pratapa wrote: > > *Kingbird (The Multi Region orchestrator):* > > We are proud to announce kingbird is not only a centralized > quota and resource-manager but also a  Multi-region Orchestrator. > > *Use-cases covered: > > *- Admin can synchronize and periodically balance quotas across > regions and can have a global view of quotas of all the tenants > across regions. > - A user can sync a resource or a group of resources from one > region to other in a single go > > > What precisely do you mean by "resources" above? > > Also, by "syncing", do you mean "replicating"? The reason I ask is > because in the case of, say, VM "resources", you can't "sync" a VM > across regions. You can replicate its bootable image, but you can't > "sync" a VM's state across multiple OpenStack deployments. > >   A user can sync multiple key-pairs, images, and flavors from > one region to other, ( Flavor can be synced only by admin) > > - A user must have complete tempest test-coverage for all the > scenarios/services rendered by kingbird. > > - Horizon plugin so that user can access/view global limits. 
> > * Our Road-map:* > > -- Automation scripts for kingbird in >      -ansible, >      -salt >      -puppet. > -- Add SSL support to kingbird > -- Resource management in Kingbird-dashboard. > > > I'm curious what you mean by "resource management". Could you > elaborate a bit on this? > > Thanks, > -jay > > -- Kingbird in a docker > -- Add Kingbird into Kolla. > > We are looking out for*_contributors and ideas_* which can > enhance Kingbird and make kingbird a one-stop solution for all > multi-region problems > > > > *_Stable Branches :_ > * > * > Kingbird-server: > https://github.com/openstack/kingbird/tree/stable/queens > > > > * > *Python-Kingbird-client (0.2.1): > https://github.com/openstack/python-kingbirdclient/tree/0.2.1 > > > > * > > I would like to Thank all the people who have helped us in > achieving this milestone and guided us all throughout this > Journey :) > > Thanks > Goutham Pratapa > PTL > OpenStack-Kingbird. > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > -- > Cheers !!! > Goutham Pratapa From mriedemos at gmail.com Tue Feb 6 03:00:42 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 5 Feb 2018 21:00:42 -0600 Subject: [openstack-dev] [nova] heads up to users of Aggregate[Core|Ram|Disk]Filter: behavior change in >= Ocata In-Reply-To: <1253b153-9e7e-6dbe-1836-5a9a2f059bdb@gmail.com> References: <1253b153-9e7e-6dbe-1836-5a9a2f059bdb@gmail.com> Message-ID: <2ce313c6-90ff-9db9-ab0f-4b573c0f472b@gmail.com> Given the size and detail of this thread, I've tried to summarize the problems and possible solutions/workarounds in this etherpad: https://etherpad.openstack.org/p/nova-aggregate-filter-allocation-ratio-snafu For those working on this, please check that what I have written down is correct and then we can try to make some kind of plan for resolving this. On 1/16/2018 3:24 PM, melanie witt wrote: > Hello Stackers, > > This is a heads up to any of you using the AggregateCoreFilter, > AggregateRamFilter, and/or AggregateDiskFilter in the filter scheduler. > These filters have effectively allowed operators to set overcommit > ratios per aggregate rather than per compute node in <= Newton. > > Beginning in Ocata, there is a behavior change where aggregate-based > overcommit ratios will no longer be honored during scheduling. Instead, > overcommit values must be set on a per compute node basis in nova.conf. > > Details: as of Ocata, instead of considering all compute nodes at the > start of scheduler filtering, an optimization has been added to query > resource capacity from placement and prune the compute node list with > the result *before* any filters are applied. Placement tracks resource > capacity and usage and does *not* track aggregate metadata [1]. Because > of this, placement cannot consider aggregate-based overcommit and will > exclude compute nodes that do not have capacity based on per compute > node overcommit. > > How to prepare: if you have been relying on per aggregate overcommit, > during your upgrade to Ocata, you must change to using per compute node > overcommit ratios in order for your scheduling behavior to stay > consistent. Otherwise, you may notice increased NoValidHost scheduling > failures as the aggregate-based overcommit is no longer being > considered. 
You can safely remove the AggregateCoreFilter, > AggregateRamFilter, and AggregateDiskFilter from your enabled_filters > and you do not need to replace them with any other core/ram/disk > filters. The placement query takes care of the core/ram/disk filtering > instead, so CoreFilter, RamFilter, and DiskFilter are redundant. > > Thanks, > -melanie > > [1] Placement has been a new slate for resource management and prior to > placement, there were conflicts between the different methods for > setting overcommit ratios that were never addressed, such as, "which > value to take if a compute node has overcommit set AND the aggregate has > it set? Which takes precedence?" And, "if a compute node is in more than > one aggregate, which overcommit value should be taken?" So, the > ambiguities were not something that was desirable to bring forward into > placement. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Thanks, Matt From jungleboyj at gmail.com Tue Feb 6 04:23:15 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Mon, 5 Feb 2018 22:23:15 -0600 Subject: [openstack-dev] [cinder][ptl] Rocky PTL Candidacy ... Message-ID: <1d950ba6-8133-0861-48e2-18ffc04230d1@gmail.com> All, This note is to declare my candidacy for the Cinder, Rocky PTL position. I can't believe that Queens is already drawing to a close and that I have been PTL for a whole release already.  I have enjoyed this new challenge and learned so much more about OpenStack as a result of being able to serve as a PTL.  It has grown not only my understanding of Cinder but of OpenStack and what it has to offer our end users. I feel that the Queens release has gone smoothly and as I have been looking at the notes from the Queens PTG I think we have been successful at addressing many of the goals that the team set back in Denver.  We have been successful in getting the development team to take ownership of our documentation.  We have focused on fixing bugs in Cinder and improving our existing functions as I had hoped we would be able to do.  We have even seen some return to growth in the development team.  All momentum that I would like to see us maintain. So, I hope that you all will give me a chance to apply what I have learned during the last 6 months by supporting me in another term as Cinder's PTL. Sincerely, Jay Bryant (jungleboyj) From dougal at redhat.com Tue Feb 6 09:10:04 2018 From: dougal at redhat.com (Dougal Matthews) Date: Tue, 6 Feb 2018 09:10:04 +0000 Subject: [openstack-dev] [Mistral][PTL] PTL candidacy for Rocky Message-ID: I am announcing my candidacy for Mistral PTL for the Rocky release cycle. If you don't know me, I am d0ugal on Freenode. I have been working full time on OpenStack since the Icehouse release cycle in 2014. I started to contribute to Mistral in 2016 and joined the core team later that year. Since then I have been dedicating more of my time to the Mistral project. I am also a core in TripleO, which relies on Mistral. I am employed by Red Hat. Mistral has consistently been improving at a steady pace under the leadership of Renat, the current PTL, with well defined cycle goals. 
I hope to continue this work and focus our efforts in the following areas: * CI and testing In the Queens cycle we made some key improvements here, we enabled voting on the devstack CI jobs and transitioned to zuulv3 but we still have work to do. The Tempest jobs can still be unstable and only exercise small portions of the API. The coverage jobs have remained non-voting as they are unstable. We don't test database migrations. * Documentation and Onboarding I would like to put a stronger focus on documentation, to make the Mistral onboarding process easier for new users, operators and contributors. Mistral has proven itself to be very powerful and useful but I think we need to make it easier and more attractive to new users. This will likely require an overhaul of the documentation and a stricter requirement of documentation for changes and additions. * Further work on mistral-extra mistral-extra will provide a library of Actions that will let workflow authors easily integrate with more services and tools. In Queens we made good progress with mistral-lib, a new library for writing actions. In Rocky I would like to see more progress with mistral-extra. The first addition is likely to be Ansible integration and the relocation of the OpenStack actions from the main Mistral repo. This work will increase Mistrals utility and lower the barrier to entry for new workflow authors. * Consistency, Stability and HA Some components, like the event engine have been added without HA taken into consideration. I would like to see us resolve these and set a higher standard for further additions to avoid this problem returning. The cron triggers subsystem also doesn't meet the quality standard we should expect - enabling it creates high load and it requires refactoring. These are some of my personal goals and ideas. However, I see the PTL role as much about coordination and collaboration. This is why I believe a focus on onboarding, documentation and stability would be best for the project. I hope to incorporate ideas from other community members and help everyone work more efficiently. I would love to speak to more new users and contributors. You can reach out to me directly or find me in #openstack-mistral. Related patch to openstack/election: https://review.openstack.org/#/c/541191/ Thanks, Dougal -------------- next part -------------- An HTML attachment was scrubbed... URL: From jean-philippe at evrard.me Tue Feb 6 09:19:13 2018 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Tue, 6 Feb 2018 09:19:13 +0000 Subject: [openstack-dev] [openstack-ansible] Roles not passing functional tests Message-ID: Dear community, We are everyday closer to the end of the cycle, and the following roles seem not ready for integration into the next release, because they are failing their own gates's functional testing for more than two weeks: - os_ceilometer - os_aodh - os_trove - os_magnum Following the guidelines of the role maturity downgrade procedure [1], I am sending you a list of bugs that needs fixing: - magnum [2] - trove [3] - aodh [4] - ceilometer [5] According to the same guidelines, if nobody fixes those bugs until the next community meeting, those roles's maturity index will be moved to unmaintained, and will eventually be removed from release. 
Thank you for your understanding, Jean-Philippe Evrard (evrardjp) [1]: https://docs.openstack.org/openstack-ansible/latest/contributor/additional-roles.html#maturity-downgrade-procedure [2]: https://bugs.launchpad.net/openstack-ansible/+bug/1747607 [3]: https://bugs.launchpad.net/openstack-ansible/+bug/1747608 [4]: https://bugs.launchpad.net/openstack-ansible/+bug/1747610 [5]: https://bugs.launchpad.net/openstack-ansible/+bug/1747612 From mahati.chamarthy at gmail.com Tue Feb 6 10:03:42 2018 From: mahati.chamarthy at gmail.com (Mahati C) Date: Tue, 6 Feb 2018 15:33:42 +0530 Subject: [openstack-dev] Call for mentors and funding - Outreachy May to Aug 2018 internships Message-ID: Hello everyone, We have an update on the Outreachy program, including a request for volunteer mentors and funding. For those of you who are not aware, Outreachy helps people from underrepresented groups get involved in free and open source software by matching interns with established mentors in the upstream community. For more info, please visit: https://wiki.openstack.org/wiki/Outreachy OpenStack is participating in the Outreachy May 2018 to August 2018 internships. The application period opens on February 12th. As the OpenStack PTG is around the corner, I understand many of you might be busy preparing for that. But putting in your project idea as soon as possible will help prospective interns to start working on their application. Plus, it's now a requirement to have at least one project idea submitted on the Outreachy website for OpenStack to show up under the current internship round. Interested mentors - please publish your project ideas on this page https://www.outreachy.org/communities/cfp/openstack/submit-project/. Here is a link that helps you get acquainted with mentorship process: https://wiki.openstack.org/wiki/Outreachy/Mentors We are also looking for additional sponsors to help support the increase in OpenStack applicants. The sponsorship cost is 6,500 USD per intern, which is used to provide them a stipend for the three-month program. You can learn more about sponsorship here: https://www.outreachy.org/sponsor/ . Outreachy has been one of the most important and effective diversity efforts we’ve invested in. We have had many interns turn into long term OpenStack contributors. Please help spread the word. If you are interested in becoming a mentor or sponsoring an intern, please contact me (mahati.chamarthy AT intel.com) or Victoria (victoria AT redhat.com). Thank you! Best, Mahati -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhengzhenyulixi at gmail.com Tue Feb 6 10:48:27 2018 From: zhengzhenyulixi at gmail.com (Zhenyu Zheng) Date: Tue, 6 Feb 2018 18:48:27 +0800 Subject: [openstack-dev] [nova] Should we get auth from context for Neutron endpoint? Message-ID: Hi Nova, While doing some test with my newly deployed devstack env today, it turns out that the default devstack deployment cannot cleanup networks after the retry attempt exceeded. This is because in the deployment with super-conductor and cell-conductor, the retry and cleanup logic is in cell-conductor [1], and by default the devstack didn't put Neutron endpoint info in nova_cell1.conf. And as the neutron endpoint is also not included in the context [2], so we can't find Neutron endpoint when try to cleanup network [3]. The solution is simple though, ether add Neutron endpoint info in nova_cell1.conf in devstack or change Nova code to support get auth from context. 
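For reference, the first option is just the usual [neutron] auth section in nova_cell1.conf, something along these lines (sample values only; the exact entries devstack would write are an assumption on my side):

    [neutron]
    # sample values -- adjust endpoint, domain and credentials for your deployment
    region_name = RegionOne
    auth_type = password
    auth_url = http://CONTROLLER_IP/identity
    project_name = service
    project_domain_name = Default
    username = neutron
    user_domain_name = Default
    password = NEUTRON_PASS
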
I think the latter one is better as in real deployment there could be many cells and by doing this can ignore config it all the time. Any particular consideration that Neutron is not included in [2]? Suggestions on how this should be fixed? I also registered a devstack bug to fix it in devstack [4]. [1] https://github.com/openstack/nova/blob/bccf26c93a973d000e4339843ce9256814286d10/nova/conductor/manager.py#L604 [2] https://github.com/openstack/nova/blob/9519601401ee116a9197fe3b5d571495a96912e9/nova/context.py#L121 [3] https://bugs.launchpad.net/nova/+bug/1747600 [4] https://bugs.launchpad.net/devstack/+bug/1747598 BR, Kevin Zheng -------------- next part -------------- An HTML attachment was scrubbed... URL: From tenobreg at redhat.com Tue Feb 6 11:53:17 2018 From: tenobreg at redhat.com (Telles Nobrega) Date: Tue, 06 Feb 2018 11:53:17 +0000 Subject: [openstack-dev] [sahara] Pre-PTG Doc Day Message-ID: Hi folks, as we discussed in our last meeting, tomorrow (Wednesday 7th) we are going to do a first overview of our documentation in order to gather the maximum information of where we need to fix, add or remove stuff so we don't waste time at PTG on this. We can fix small problems but the main goal is to have a list of places that need fixing, so we can use during Rocky cycle as a guide for documentation improvement. In order to maiximize our reach it would good to split where each of us will be looking at the documentation, so I thought: From: https://docs.openstack.org/sahara/latest/ we can split in 4 parts, since user guide seems to be biggest one person would work on User Guide, and other 3 each work on 2 topics (we can choose freely) From: https://docs.openstack.org/sahara-tests/latest/ it seems very direct, maybe tosky and I can take a look at it (everyone is free to do so as well) If anyone needs help, or the rabbit hole grows too much we can always reorganize the split. What do you think of it? Thanks -- TELLES NOBREGA SOFTWARE ENGINEER Red Hat Brasil Av. Brg. Faria Lima, 3900 - 8º andar - Itaim Bibi, São Paulo tenobreg at redhat.com TRIED. TESTED. TRUSTED. Red Hat é reconhecida entre as melhores empresas para trabalhar no Brasil pelo Great Place to Work. -------------- next part -------------- An HTML attachment was scrubbed... URL: From cdent+os at anticdent.org Tue Feb 6 12:31:54 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 6 Feb 2018 12:31:54 +0000 (GMT) Subject: [openstack-dev] [tc] [all] TC Report 18-06 Message-ID: HTML: https://anticdent.org/tc-report-18-06.html Nothing revolutionary in the past week of Technical Committee discussion. At least not that I witnessed. If there's a revolutionary cabal somewhere, pretty please I'd like to be a part of it. Main activity is (again) related to [openstack-wide goals](https://governance.openstack.org/tc/goals/index.html) and preparing for the [PTG](https://www.openstack.org/ptg) in Dublin. ## PTG Planning The [schedule](https://docs.google.com/spreadsheets/d/e/2PACX-1vRmqAAQZA1rIzlNJpVp-X60-z6jMn_95BKWtf0csGT9LkDharY-mppI25KjiuRasmK413MxXcoSU7ki/pubhtml?gid=1374855307) has mostly solidified. There are some etherpads related to [post lunch discussions](https://etherpad.openstack.org/p/dublin-PTG-postlunch), including a specific one for [Monday](https://etherpad.openstack.org/p/dublin-PTG-postlunch-monday). 
See the [irc](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-02-01.log.html#t2018-02-01T15:04:17) [logs](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-02-06.log.html#t2018-02-06T09:23:36) for more context. I've seen a fair number of people saying things like "since the PTG is coming up soon, let's talk about this there." Given how rare the face to face meetups are, I would hope that we could orient the time for talking about those things which are only (or best) talked about in person and keep the regular stuff in email. Long term planning, complex knowledge sharing, and conflict resolution are good candidates; choosing the color of the shed, not so much. The PTG is expected to sell out; [nine tickets left this morning](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-02-06.log.html#t2018-02-06T09:43:33). ## Feedback Loops Monday had a broad ranging [discussion](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-02-05.log.html#t2018-02-05T13:51:13) about gaps in the feedback loop, notably feedback from users who have as their primary point of contact their vendor. There was some sentiment of "we do a lot to try to make this happen, at a certain point we need to move forward with what we've got and trust ourselves". As all conversations eventually do, this led to talk of LTS and whether [renaming branches](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-02-05.log.html#t2018-02-05T15:54:30) might be helpful. The eventually decision was more trouble than it was worth. If you have some feedback you'd like to make, or something you think needs to be discussed at the PTG, please show up to [office hours](https://governance.openstack.org/tc/#office-hours), send some email, or write something on one of the many PTG etherpads that are brewing. Thank you. -- Chris Dent (⊙_⊙') https://anticdent.org/ freenode: cdent tw: @anticdent From a.chadin at servionica.ru Tue Feb 6 12:56:06 2018 From: a.chadin at servionica.ru (=?koi8-r?B?/sHEyc4g4czFy9PBzsTS?=) Date: Tue, 6 Feb 2018 12:56:06 +0000 Subject: [openstack-dev] [watcher] Alex Chadin candidacy for Watcher Message-ID: <3CD67861-7F8F-417C-A40C-DF92D482D925@servionica.ru> Hello, This is my candidacy to continue my work as the Watcher PTL for the Rocky cycle. I've been working on Watcher since fall of 2015 and am honored to lead this project during last two cycles. Watcher will get baremetal and storage supports with Queens release, along with new strategies and new features. New strategy restrictions and modifications would allow Watcher to execute strategies in coherence with other openstack services. Watcher's actions should be predictable and reliable in any time. There are some works we need to take into account: * Improve security by providing unified API validation way. * Extend set of notifications that Watcher consumes. * Provide selectors that would help users to choose the best way to achieve objections. Along with strategies maintenance, we expand set of supporting resources and projects. I'm happy to welcome new contributors on the project and ready to answer any question on our channel #openstack-watcher. Alexander Chadin (alexchadin) -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From dtantsur at redhat.com Tue Feb 6 13:28:20 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Tue, 6 Feb 2018 14:28:20 +0100 Subject: [openstack-dev] [ironic] Nominating Hironori Shiina for ironic-core In-Reply-To: References: Message-ID: +1 On 02/05/2018 07:12 PM, Julia Kreger wrote: > I would like to nominate Hironori Shiina to ironic-core. He has been > working in the ironic community for some time, and has been helping > over the past several cycles with more complex features. He has > demonstrated an understanding of Ironic's code base, mechanics, and > overall community style. His review statistics are also extremely > solid. I personally have a great deal of trust in his reviews. > > I believe he would make a great addition to our team. > > Thanks, > > -Julia > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From james.slagle at gmail.com Tue Feb 6 13:45:45 2018 From: james.slagle at gmail.com (James Slagle) Date: Tue, 6 Feb 2018 08:45:45 -0500 Subject: [openstack-dev] [TripleO][ui] config-download (ansible deployment) by default in Rocky Message-ID: One of the things I hope to accomplish in Rocky is to switch to the config-download[1] ansible driven deployment mechanism by default. We made a lot of great progress during Queens, including switching over several of our CI jobs such as containers-multinode and ovb-ha (among a few others). I have filed several blueprints to track the remaining work. They can all be seen linked as related dependencies from the final task: https://blueprints.launchpad.net/tripleo/+spec/non-config-download-deprecate The final task is to deprecate the non-config-download deploy mechanism. This is critical because we only want to be supporting a single mechanism, so we really need to switch to config-download as the new default *and* deprecate the previous method. I'll briefly list the other blueprints: UI support: https://blueprints.launchpad.net/tripleo/+spec/config-download-ui ceph-ansible to use external_deploy_tasks: https://blueprints.launchpad.net/tripleo/+spec/ceph-ansible-external-deploy-tasks octavia to use external_deploy_tasks: https://blueprints.launchpad.net/tripleo/+spec/octavia-external-deploy-tasks skydive to use external_deploy_tasks: https://blueprints.launchpad.net/tripleo/+spec/skydive-external-deploy-tasks deprecate workflow_tasks: https://blueprints.launchpad.net/tripleo/+spec/deprecate-workflow-tasks (there will probably be others) During Rocky, it will probably be helpful to organize our work around a squad, with an etherpad to help track and report high level status. I'll set this up this week. If you plan to participate with this work, please let me know. [1] https://docs.openstack.org/tripleo-docs/latest/install/advanced_deployment/ansible_config_download.html -- -- James Slagle -- From openstack at fried.cc Tue Feb 6 14:00:54 2018 From: openstack at fried.cc (Eric Fried) Date: Tue, 6 Feb 2018 08:00:54 -0600 Subject: [openstack-dev] [nova] Should we get auth from context for Neutron endpoint? In-Reply-To: References: Message-ID: Zheng 先生- I *think* you're right that 'network' should be included in [2]. I can't think of any reason it shouldn't be. 
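If it helps to visualize, the change you're describing would be roughly the following in nova/context.py -- treat this as a sketch only, since I'm writing the existing whitelist from memory and the real set of service types there may differ:

    # sketch: keep 'network' entries when trimming the service catalog
    # (the other types listed here are assumptions, not the literal code)
    self.service_catalog = [s for s in service_catalog
                            if s.get('type') in ('image', 'volume',
                                                 'volumev2', 'volumev3',
                                                 'key-manager', 'network')]
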
Does that fix the problem by itself? I believe the Neutron API code is already getting its auth from context... sometimes [5]. If you want to make sure it's an admin token, add admin=True here [6] - but that may have further-reaching implications. [5] https://github.com/openstack/nova/blob/9519601401ee116a9197fe3b5d571495a96912e9/nova/network/neutronv2/api.py#L155 [6] https://github.com/openstack/nova/blob/9519601401ee116a9197fe3b5d571495a96912e9/nova/network/neutronv2/api.py#L1190 Good luck. efried On 02/06/2018 04:48 AM, Zhenyu Zheng wrote: > Hi Nova, > > While doing some test with my newly deployed devstack env today, it > turns out that the default devstack deployment cannot cleanup networks > after the retry attempt exceeded. This is because in the deployment with > super-conductor and cell-conductor, the retry and cleanup logic is in > cell-conductor [1], and by default the devstack didn't put Neutron > endpoint info in nova_cell1.conf. And as the neutron endpoint is also > not included in the context [2], so we can't find Neutron endpoint when > try to cleanup network [3]. > > The solution is simple though, ether add Neutron endpoint info in > nova_cell1.conf in devstack or change Nova code to support get auth from > context. I think the latter one is better as in real deployment there > could be many cells and by doing this can ignore config it all the time. > > Any particular consideration that Neutron is not included in [2]? > > Suggestions on how this should be fixed? > > I also registered a devstack bug to fix it in devstack [4]. > > [1] https://github.com/openstack/nova/blob/bccf26c93a973d000e4339843ce9256814286d10/nova/conductor/manager.py#L604 > [2] https://github.com/openstack/nova/blob/9519601401ee116a9197fe3b5d571495a96912e9/nova/context.py#L121 > [3] https://bugs.launchpad.net/nova/+bug/1747600 > [4] https://bugs.launchpad.net/devstack/+bug/1747598 > > BR, > > Kevin Zheng > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From emilien at redhat.com Tue Feb 6 14:44:37 2018 From: emilien at redhat.com (Emilien Macchi) Date: Tue, 6 Feb 2018 06:44:37 -0800 Subject: [openstack-dev] [all] [tc] Community Goals for Rocky In-Reply-To: References: <397BB99F-D7B2-47B3-9724-E8B628EFD5C2@cern.ch> <76c4df1e-2e82-96c6-a983-36040855a42d@gmail.com> Message-ID: TC voted (but not approved yet) and selected 2 goals that will likely be approved if no strong voice is raised this week: Remove mox https://review.openstack.org/#/c/532361/ Toggle the debug option at runtime https://review.openstack.org/#/c/534605/ If you have any comment on these 2 selected goals, please say it now otherwise TC will approve it and we'll discuss about details at the PTG. Thanks, On Wed, Jan 17, 2018 at 5:43 PM, Emilien Macchi wrote: > On Wed, Jan 17, 2018 at 4:04 AM, Erno Kuvaja wrote: > [...] > > Looking the current contributor base and momentum on Glance, I'd say > > we would fail to catch up with most of these. I think we've got rid of > > mox already and I'm not exactly sure how the mutable config goal > > aligns with the Glance's ability to reload configs in flight, so those > > two might be doable, based on the amount of bikeshedding needed for > > any API related change I'd say the pagination link would probably be > > least likely done before Unicorn release. 
> > > > - Jokke > > If mox is already done, consider also that you already have the cold > upgrade tag, so you wouldn't have anything to do in the cycle (if we > go with these 2 goals). > -- > Emilien Macchi > -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From vsaienko at mirantis.com Tue Feb 6 15:03:27 2018 From: vsaienko at mirantis.com (Vasyl Saienko) Date: Tue, 6 Feb 2018 17:03:27 +0200 Subject: [openstack-dev] [ironic] Nominating Hironori Shiina for ironic-core In-Reply-To: References: Message-ID: +1 On Mon, Feb 5, 2018 at 8:12 PM, Julia Kreger wrote: > I would like to nominate Hironori Shiina to ironic-core. He has been > working in the ironic community for some time, and has been helping > over the past several cycles with more complex features. He has > demonstrated an understanding of Ironic's code base, mechanics, and > overall community style. His review statistics are also extremely > solid. I personally have a great deal of trust in his reviews. > > I believe he would make a great addition to our team. > > Thanks, > > -Julia > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From james.page at ubuntu.com Tue Feb 6 15:21:08 2018 From: james.page at ubuntu.com (James Page) Date: Tue, 06 Feb 2018 15:21:08 +0000 Subject: [openstack-dev] [charms] PTL for Rocky Message-ID: Hi All It will (probably) come as no surprise that I'd like to announce my candidacy for PTL of OpenStack Charms [0]! We've made some good progress in the last cycle with some general housekeeping across the charms set, including removal of untested and generally unused database and messaging configurations. We've also finally managed to complete the deprecation of the Ceph charm with a well documented migration path to the newer Charms for operators to use. This is all great but we still have more housekeeping todo! Specifically we need to complete migration to using Python 3 as the default execution environment for charms (this was started during Queens, but is not yet complete). I'd like to see more depth in the networking configurations and choices the charms present (we already have specs raised for Dynamic Routing and Network Segment support) and I think these will appeal to operators with more complex networking requirements for OpenStack. I think we also need to finish the work we started last year on improving the Telemetry storage; Aodh, Gnocchi and Ceilometer are all looking in pretty good shape now, but we need to add Panko to the fold! I still think we have a bit of an issue with level of entry to writing a charm - it turns out that writing a charm is dead easy; writing unit tests is also pretty easy and familiar with anyone who writes any amount of Python; enabling full functional testing of a charm is much harder. Our historic tool choice (amulet) does not help in this area and I look forward to working with the dev team this cycle to move us onto something that's a) more directly maintainable and b) easier to engage with as we bring new charms and features onboard. I look forward to helping steer the project during the Rocky cycle! 
Cheers James [0] https://review.openstack.org/541306 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ltoscano at redhat.com Tue Feb 6 15:28:56 2018 From: ltoscano at redhat.com (Luigi Toscano) Date: Tue, 06 Feb 2018 16:28:56 +0100 Subject: [openstack-dev] [sahara] Pre-PTG Doc Day In-Reply-To: References: Message-ID: <4653066.MMaySpfgC9@whitebase.usersys.redhat.com> On Tuesday, 6 February 2018 12:53:17 CET Telles Nobrega wrote: > Hi folks, > > as we discussed in our last meeting, tomorrow (Wednesday 7th) we are going > to do a first overview of our documentation in order to gather the maximum > information of where we need to fix, add or remove stuff so we don't waste > time at PTG on this. > > We can fix small problems but the main goal is to have a list of places > that need fixing, so we can use during Rocky cycle as a guide for > documentation improvement. Of course I think that if we find that something not trivial can be fixed anyway, no one is going to complain. > > In order to maiximize our reach it would good to split where each of us > will be looking at the documentation, so I thought: > > From: > https://docs.openstack.org/sahara/latest/ we can split in 4 parts, since > user guide seems to be biggest one person would work on User Guide, and > other 3 each work on 2 topics (we can choose freely) I can help with this, but I will start with sahara-tests (see below); during the day, if no one else is working on other parts, I will start reading some of those other documents. > > From: > https://docs.openstack.org/sahara-tests/latest/ it seems very direct, maybe > tosky and I can take a look at it (everyone is free to do so as well) I will focus on this at first. > > If anyone needs help, or the rabbit hole grows too much we can always > reorganize the split. > > What do you think of it? I would say: let's start reading, and maybe we can find that there are some topics that can be covered throughout all the documents: reference to technologies that are no more valid, or the reference to the pre-built images, etc, and fix them first. Let's see. Ciao -- Luigi From tenobreg at redhat.com Tue Feb 6 15:32:44 2018 From: tenobreg at redhat.com (Telles Nobrega) Date: Tue, 06 Feb 2018 15:32:44 +0000 Subject: [openstack-dev] [sahara] Pre-PTG Doc Day In-Reply-To: <4653066.MMaySpfgC9@whitebase.usersys.redhat.com> References: <4653066.MMaySpfgC9@whitebase.usersys.redhat.com> Message-ID: Sounds great Luigi, thanks On Tue, Feb 6, 2018 at 12:31 PM Luigi Toscano wrote: > On Tuesday, 6 February 2018 12:53:17 CET Telles Nobrega wrote: > > Hi folks, > > > > as we discussed in our last meeting, tomorrow (Wednesday 7th) we are > going > > to do a first overview of our documentation in order to gather the > maximum > > information of where we need to fix, add or remove stuff so we don't > waste > > time at PTG on this. > > > > We can fix small problems but the main goal is to have a list of places > > that need fixing, so we can use during Rocky cycle as a guide for > > documentation improvement. > > Of course I think that if we find that something not trivial can be fixed > anyway, no one is going to complain. 
> > > > > In order to maiximize our reach it would good to split where each of us > > will be looking at the documentation, so I thought: > > > > From: > > https://docs.openstack.org/sahara/latest/ we can split in 4 parts, since > > user guide seems to be biggest one person would work on User Guide, and > > other 3 each work on 2 topics (we can choose freely) > > I can help with this, but I will start with sahara-tests (see below); > during > the day, if no one else is working on other parts, I will start reading > some > of those other documents. > > > > > From: > > https://docs.openstack.org/sahara-tests/latest/ it seems very direct, > maybe > > tosky and I can take a look at it (everyone is free to do so as well) > > I will focus on this at first. > > > > > > If anyone needs help, or the rabbit hole grows too much we can always > > reorganize the split. > > > > What do you think of it? > > I would say: let's start reading, and maybe we can find that there are some > topics that can be covered throughout all the documents: reference to > technologies that are no more valid, or the reference to the pre-built > images, > etc, and fix them first. Let's see. > > Ciao > -- > Luigi > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- TELLES NOBREGA SOFTWARE ENGINEER Red Hat Brasil Av. Brg. Faria Lima, 3900 - 8º andar - Itaim Bibi, São Paulo tenobreg at redhat.com TRIED. TESTED. TRUSTED. Red Hat é reconhecida entre as melhores empresas para trabalhar no Brasil pelo Great Place to Work. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jeremyfreudberg at gmail.com Tue Feb 6 15:35:51 2018 From: jeremyfreudberg at gmail.com (Jeremy Freudberg) Date: Tue, 6 Feb 2018 10:35:51 -0500 Subject: [openstack-dev] [sahara] Pre-PTG Doc Day In-Reply-To: References: Message-ID: On Tue, Feb 6, 2018 at 6:53 AM, Telles Nobrega wrote: > Hi folks, > > as we discussed in our last meeting, tomorrow (Wednesday 7th) we are going > to do a first overview of our documentation in order to gather the maximum > information of where we need to fix, add or remove stuff so we don't waste > time at PTG on this. +1 > > We can fix small problems but the main goal is to have a list of places that > need fixing, so we can use during Rocky cycle as a guide for documentation > improvement. > > In order to maiximize our reach it would good to split where each of us will > be looking at the documentation, so I thought: > > From: > https://docs.openstack.org/sahara/latest/ we can split in 4 parts, since > user guide seems to be biggest one person would work on User Guide, and > other 3 each work on 2 topics (we can choose freely) > > From: > https://docs.openstack.org/sahara-tests/latest/ it seems very direct, maybe > tosky and I can take a look at it (everyone is free to do so as well) There's the saharaclient docs as well - but they are quite short and I already read through most of them when I was working on the APIv2 stuff. So I'll take the client docs, plus whatever other section in the main doc that you want to assign me :) > > If anyone needs help, or the rabbit hole grows too much we can always > reorganize the split. > > What do you think of it? Good plan. Although we should define our focus a bit better. 
Are we just trying to find outdated/incorrect stuff, or should we also be trying to look at the big picture? By big picture I mean trying to identify gaps, or thinking about organization and not just content. > > Thanks > > > > > -- > > TELLES NOBREGA > > SOFTWARE ENGINEER > > Red Hat Brasil > > Av. Brg. Faria Lima, 3900 - 8º andar - Itaim Bibi, São Paulo > > tenobreg at redhat.com > > TRIED. TESTED. TRUSTED. > Red Hat é reconhecida entre as melhores empresas para trabalhar no Brasil > pelo Great Place to Work. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From tenobreg at redhat.com Tue Feb 6 15:45:26 2018 From: tenobreg at redhat.com (Telles Nobrega) Date: Tue, 06 Feb 2018 15:45:26 +0000 Subject: [openstack-dev] [sahara] Pre-PTG Doc Day In-Reply-To: References: Message-ID: On Tue, Feb 6, 2018 at 12:37 PM Jeremy Freudberg wrote: > On Tue, Feb 6, 2018 at 6:53 AM, Telles Nobrega > wrote: > > Hi folks, > > > > as we discussed in our last meeting, tomorrow (Wednesday 7th) we are > going > > to do a first overview of our documentation in order to gather the > maximum > > information of where we need to fix, add or remove stuff so we don't > waste > > time at PTG on this. > > +1 > > > > > We can fix small problems but the main goal is to have a list of places > that > > need fixing, so we can use during Rocky cycle as a guide for > documentation > > improvement. > > > > In order to maiximize our reach it would good to split where each of us > will > > be looking at the documentation, so I thought: > > > > From: > > https://docs.openstack.org/sahara/latest/ we can split in 4 parts, since > > user guide seems to be biggest one person would work on User Guide, and > > other 3 each work on 2 topics (we can choose freely) > > > > From: > > https://docs.openstack.org/sahara-tests/latest/ it seems very direct, > maybe > > tosky and I can take a look at it (everyone is free to do so as well) > > There's the saharaclient docs as well - but they are quite short and I > already read through most of them when I was working on the APIv2 > stuff. So I'll take the client docs, plus whatever other section in > the main doc that you want to assign me :) > > > > > If anyone needs help, or the rabbit hole grows too much we can always > > reorganize the split. > > > > What do you think of it? > > Good plan. Although we should define our focus a bit better. Are we > just trying to find outdated/incorrect stuff, or should we also be > trying to look at the big picture? By big picture I mean trying to > identify gaps, or thinking about organization and not just content. > I don't believe this totally up to me, what do you think? I would say lets consider both cases, but work mainly on the outdated/incorrect stuff and at PTG/after we can work in more details on the big picture stuff. > > > > Thanks > > > > > > > > > > -- > > > > TELLES NOBREGA > > > > SOFTWARE ENGINEER > > > > Red Hat Brasil > > > > Av. Brg. Faria Lima, 3900 - 8º andar - Itaim Bibi, São Paulo > > > > tenobreg at redhat.com > > > > TRIED. TESTED. TRUSTED. > > Red Hat é reconhecida entre as melhores empresas para trabalhar no > Brasil > > pelo Great Place to Work. 
> > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- TELLES NOBREGA SOFTWARE ENGINEER Red Hat Brasil Av. Brg. Faria Lima, 3900 - 8º andar - Itaim Bibi, São Paulo tenobreg at redhat.com TRIED. TESTED. TRUSTED. Red Hat é reconhecida entre as melhores empresas para trabalhar no Brasil pelo Great Place to Work. -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Tue Feb 6 16:04:06 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 06 Feb 2018 11:04:06 -0500 Subject: [openstack-dev] [Release-job-failures][ironic][release] release-post job for openstack/releases failed In-Reply-To: References: Message-ID: <1517932912-sup-5278@lrrr.local> Excerpts from zuul's message of 2018-02-06 00:17:48 +0000: > Build failed. > > - tag-releases http://logs.openstack.org/c3/c3ae8a1d87084bcee73e96e2f9e2d174ee610ed4/release-post/tag-releases/48a1f5a/ : TIMED_OUT in 32m 09s > - publish-static publish-static : SKIPPED > After looking at the logs from this failed job, it looks like the release itself was successful but the branch was not created properly (the job timed out before that step). I have run the script to create the branch by hand, so the patches submitted as part of the process came from me instead of the bot. Please treat them as automatically bot-generated patches anyway and take them over and fix them if they have problems (that usually only applies to the sphinx update for reno, but keep an eye on all of them just in case). Doug From giuseppe.decandia at gmail.com Tue Feb 6 16:21:46 2018 From: giuseppe.decandia at gmail.com (Giuseppe de Candia) Date: Tue, 6 Feb 2018 10:21:46 -0600 Subject: [openstack-dev] [security] Security PTG Planning, x-project request for topics. In-Reply-To: References: Message-ID: Hi Folks, I know the request is very late, but I wasn't aware of this SIG until recently. Would it be possible to present a new project to the Security SIG at the PTG? I need about 30 minutes. I'm hoping to drum up interest in the project, sign on users and contributors and get feedback. For the past few months I have been working on a new project - Tatu [1]- to automate the management of SSH certificates (for both users and hosts) in OpenStack. Tatu allows users to generate SSH certificates with principals based on their Project role assignments, and VMs automatically set up their SSH host certificate (and related config) via Nova vendor data. The project also manages bastions and DNS entries so that users don't have to assign Floating IPs for SSH nor remember IP addresses. I have a working demo (including Horizon panels [2] and OpenStack CLI [3]), but am still working on the devstack script and patches [4] to get Tatu's repositories into OpenStack's GitHub and Gerrit. I'll try to post a demo video in the next few days. best regards, Pino References: 1. 
https://github.com/pinodeca/tatu (Please note this is still very much a work in progress, lots of TODOs in the code, very little testing and documentation doesn't reflect the latest design). 2. https://github.com/pinodeca/tatu-dashboard 3. https://github.com/pinodeca/python-tatuclient 4. https://review.openstack.org/#/q/tatu On Wed, Jan 31, 2018 at 12:03 PM, Luke Hinds wrote: > > On Mon, Jan 29, 2018 at 2:29 PM, Adam Young wrote: > >> Bug 968696 and System Roles. Needs to be addressed across the Service >> catalog. >> > > Thanks Adam, will add it to the list. I see it's been open since 2012! > > >> >> On Mon, Jan 29, 2018 at 7:38 AM, Luke Hinds wrote: >> >>> Just a reminder as we have not had many uptakes yet.. >>> >>> Are there any projects (new and old) that would like to make use of the >>> security SIG for either gaining another perspective on security challenges >>> / blueprints etc or for help gaining some cross project collaboration? >>> >>> On Thu, Jan 11, 2018 at 3:33 PM, Luke Hinds wrote: >>> >>>> Hello All, >>>> >>>> I am seeking topics for the PTG from all projects, as this will be >>>> where we try out are new form of being a SIG. >>>> >>>> For this PTG, we hope to facilitate more cross project collaboration >>>> topics now that we are a SIG, so if your project has a security need / >>>> problem / proposal than please do use the security SIG room where a larger >>>> audience may be present to help solve problems and gain x-project consensus. >>>> >>>> Please see our PTG planning pad [0] where I encourage you to add to the >>>> topics. >>>> >>>> [0] https://etherpad.openstack.org/p/security-ptg-rocky >>>> >>>> -- >>>> Luke Hinds >>>> Security Project PTL >>>> >>> >>> >>> ____________________________________________________________ >>> ______________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.op >>> enstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > > -- > Luke Hinds | NFV Partner Engineering | CTO Office | Red Hat > e: lhinds at redhat.com | irc: lhinds @freenode | t: +44 12 52 36 2483 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lhinds at redhat.com Tue Feb 6 16:36:57 2018 From: lhinds at redhat.com (Luke Hinds) Date: Tue, 6 Feb 2018 16:36:57 +0000 Subject: [openstack-dev] [security] Security PTG Planning, x-project request for topics. In-Reply-To: References: Message-ID: On Tue, Feb 6, 2018 at 4:21 PM, Giuseppe de Candia < giuseppe.decandia at gmail.com> wrote: > Hi Folks, > > I know the request is very late, but I wasn't aware of this SIG until > recently. Would it be possible to present a new project to the Security SIG > at the PTG? I need about 30 minutes. I'm hoping to drum up interest in the > project, sign on users and contributors and get feedback. 
> > For the past few months I have been working on a new project - Tatu [1]- > to automate the management of SSH certificates (for both users and hosts) > in OpenStack. Tatu allows users to generate SSH certificates with > principals based on their Project role assignments, and VMs automatically > set up their SSH host certificate (and related config) via Nova vendor > data. The project also manages bastions and DNS entries so that users don't > have to assign Floating IPs for SSH nor remember IP addresses. > > I have a working demo (including Horizon panels [2] and OpenStack CLI > [3]), but am still working on the devstack script and patches [4] to get > Tatu's repositories into OpenStack's GitHub and Gerrit. I'll try to post a > demo video in the next few days. > > best regards, > Pino > > > References: > > 1. https://github.com/pinodeca/tatu (Please note this is still very > much a work in progress, lots of TODOs in the code, very little testing and > documentation doesn't reflect the latest design). > 2. https://github.com/pinodeca/tatu-dashboard > 3. https://github.com/pinodeca/python-tatuclient > 4. https://review.openstack.org/#/q/tatu > > > > Hi Giuseppe, of course you can! I will add you to the agenda. We could get your an hour if it allows more time for presenting and post discussion? We will be meeting in an allocated room on Monday (details to follow). https://etherpad.openstack.org/p/security-ptg-rocky Luke > > > On Wed, Jan 31, 2018 at 12:03 PM, Luke Hinds wrote: > >> >> On Mon, Jan 29, 2018 at 2:29 PM, Adam Young wrote: >> >>> Bug 968696 and System Roles. Needs to be addressed across the Service >>> catalog. >>> >> >> Thanks Adam, will add it to the list. I see it's been open since 2012! >> >> >>> >>> On Mon, Jan 29, 2018 at 7:38 AM, Luke Hinds wrote: >>> >>>> Just a reminder as we have not had many uptakes yet.. >>>> >>>> Are there any projects (new and old) that would like to make use of the >>>> security SIG for either gaining another perspective on security challenges >>>> / blueprints etc or for help gaining some cross project collaboration? >>>> >>>> On Thu, Jan 11, 2018 at 3:33 PM, Luke Hinds wrote: >>>> >>>>> Hello All, >>>>> >>>>> I am seeking topics for the PTG from all projects, as this will be >>>>> where we try out are new form of being a SIG. >>>>> >>>>> For this PTG, we hope to facilitate more cross project collaboration >>>>> topics now that we are a SIG, so if your project has a security need / >>>>> problem / proposal than please do use the security SIG room where a larger >>>>> audience may be present to help solve problems and gain x-project consensus. >>>>> >>>>> Please see our PTG planning pad [0] where I encourage you to add to >>>>> the topics. 
>>>>> >>>>> [0] https://etherpad.openstack.org/p/security-ptg-rocky >>>>> >>>>> -- >>>>> Luke Hinds >>>>> Security Project PTL >>>>> >>>> >>>> >>>> ____________________________________________________________ >>>> ______________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: OpenStack-dev-request at lists.op >>>> enstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>>> >>> >>> ____________________________________________________________ >>> ______________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.op >>> enstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >> >> >> -- >> Luke Hinds | NFV Partner Engineering | CTO Office | Red Hat >> e: lhinds at redhat.com | irc: lhinds @freenode | t: +44 12 52 36 2483 >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Luke Hinds | NFV Partner Engineering | CTO Office | Red Hat e: lhinds at redhat.com | irc: lhinds @freenode | t: +44 12 52 36 2483 -------------- next part -------------- An HTML attachment was scrubbed... URL: From witold.bedyk at est.fujitsu.com Tue Feb 6 16:38:29 2018 From: witold.bedyk at est.fujitsu.com (Bedyk, Witold) Date: Tue, 6 Feb 2018 16:38:29 +0000 Subject: [openstack-dev] [monasca] Remove MySQL Connector from monasca-thresh Message-ID: <484529d23b1547a89a830e2febeb9f09@R01UKEXCASM126.r01.fujitsu.local> Hello, During the internal legal check we've noticed that monasca-threshold Java archive includes MySQL Connector which is licensed under GPLv2. This license restricts the distribution of the consuming project and is not aligned with OpenStack licensing requirements [1]. I'm proposing to remove this library in Rocky release and use the already included Drizzle JDBC [2]. Best greetings Witek [1] https://governance.openstack.org/tc/reference/licensing.html [2] https://storyboard.openstack.org/#!/story/2001522 From giuseppe.decandia at gmail.com Tue Feb 6 16:52:59 2018 From: giuseppe.decandia at gmail.com (Giuseppe de Candia) Date: Tue, 6 Feb 2018 10:52:59 -0600 Subject: [openstack-dev] [security] Security PTG Planning, x-project request for topics. In-Reply-To: References: Message-ID: Hi Luke, Fantastic! An hour would be great if the schedule allows - there are lots of different aspects we can dive into and potential future directions the project can take. thanks! Pino On Tue, Feb 6, 2018 at 10:36 AM, Luke Hinds wrote: > > > On Tue, Feb 6, 2018 at 4:21 PM, Giuseppe de Candia < > giuseppe.decandia at gmail.com> wrote: > >> Hi Folks, >> >> I know the request is very late, but I wasn't aware of this SIG until >> recently. Would it be possible to present a new project to the Security SIG >> at the PTG? I need about 30 minutes. I'm hoping to drum up interest in the >> project, sign on users and contributors and get feedback. 
>> >> For the past few months I have been working on a new project - Tatu [1]- >> to automate the management of SSH certificates (for both users and hosts) >> in OpenStack. Tatu allows users to generate SSH certificates with >> principals based on their Project role assignments, and VMs automatically >> set up their SSH host certificate (and related config) via Nova vendor >> data. The project also manages bastions and DNS entries so that users don't >> have to assign Floating IPs for SSH nor remember IP addresses. >> >> I have a working demo (including Horizon panels [2] and OpenStack CLI >> [3]), but am still working on the devstack script and patches [4] to get >> Tatu's repositories into OpenStack's GitHub and Gerrit. I'll try to post a >> demo video in the next few days. >> >> best regards, >> Pino >> >> >> References: >> >> 1. https://github.com/pinodeca/tatu (Please note this is still very >> much a work in progress, lots of TODOs in the code, very little testing and >> documentation doesn't reflect the latest design). >> 2. https://github.com/pinodeca/tatu-dashboard >> 3. https://github.com/pinodeca/python-tatuclient >> 4. https://review.openstack.org/#/q/tatu >> >> >> >> > Hi Giuseppe, of course you can! I will add you to the agenda. We could get > your an hour if it allows more time for presenting and post discussion? > > We will be meeting in an allocated room on Monday (details to follow). > > https://etherpad.openstack.org/p/security-ptg-rocky > > Luke > > > > >> >> >> On Wed, Jan 31, 2018 at 12:03 PM, Luke Hinds wrote: >> >>> >>> On Mon, Jan 29, 2018 at 2:29 PM, Adam Young wrote: >>> >>>> Bug 968696 and System Roles. Needs to be addressed across the Service >>>> catalog. >>>> >>> >>> Thanks Adam, will add it to the list. I see it's been open since 2012! >>> >>> >>>> >>>> On Mon, Jan 29, 2018 at 7:38 AM, Luke Hinds wrote: >>>> >>>>> Just a reminder as we have not had many uptakes yet.. >>>>> >>>>> Are there any projects (new and old) that would like to make use of >>>>> the security SIG for either gaining another perspective on security >>>>> challenges / blueprints etc or for help gaining some cross project >>>>> collaboration? >>>>> >>>>> On Thu, Jan 11, 2018 at 3:33 PM, Luke Hinds wrote: >>>>> >>>>>> Hello All, >>>>>> >>>>>> I am seeking topics for the PTG from all projects, as this will be >>>>>> where we try out are new form of being a SIG. >>>>>> >>>>>> For this PTG, we hope to facilitate more cross project collaboration >>>>>> topics now that we are a SIG, so if your project has a security need / >>>>>> problem / proposal than please do use the security SIG room where a larger >>>>>> audience may be present to help solve problems and gain x-project consensus. >>>>>> >>>>>> Please see our PTG planning pad [0] where I encourage you to add to >>>>>> the topics. 
>>>>>> >>>>>> [0] https://etherpad.openstack.org/p/security-ptg-rocky >>>>>> >>>>>> -- >>>>>> Luke Hinds >>>>>> Security Project PTL >>>>>> >>>>> >>>>> >>>>> ____________________________________________________________ >>>>> ______________ >>>>> OpenStack Development Mailing List (not for usage questions) >>>>> Unsubscribe: OpenStack-dev-request at lists.op >>>>> enstack.org?subject:unsubscribe >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>> >>>>> >>>> >>>> ____________________________________________________________ >>>> ______________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: OpenStack-dev-request at lists.op >>>> enstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>>> >>> >>> >>> -- >>> Luke Hinds | NFV Partner Engineering | CTO Office | Red Hat >>> e: lhinds at redhat.com | irc: lhinds @freenode | t: +44 12 52 36 2483 >>> >>> ____________________________________________________________ >>> ______________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.op >>> enstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > > -- > Luke Hinds | NFV Partner Engineering | CTO Office | Red Hat > e: lhinds at redhat.com | irc: lhinds @freenode | t: +44 12 52 36 2483 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jeremyfreudberg at gmail.com Tue Feb 6 17:26:20 2018 From: jeremyfreudberg at gmail.com (Jeremy Freudberg) Date: Tue, 6 Feb 2018 12:26:20 -0500 Subject: [openstack-dev] [sahara] Pre-PTG Doc Day In-Reply-To: References: Message-ID: On Tue, Feb 6, 2018 at 10:45 AM, Telles Nobrega wrote: > > > On Tue, Feb 6, 2018 at 12:37 PM Jeremy Freudberg wrote: >> >> On Tue, Feb 6, 2018 at 6:53 AM, Telles Nobrega wrote: >> > Hi folks, >> > >> > as we discussed in our last meeting, tomorrow (Wednesday 7th) we are going >> > to do a first overview of our documentation in order to gather the maximum >> > information of where we need to fix, add or remove stuff so we don't waste >> > time at PTG on this. >> >> +1 >> >> > >> > We can fix small problems but the main goal is to have a list of places that >> > need fixing, so we can use during Rocky cycle as a guide for documentation >> > improvement. >> > >> > In order to maiximize our reach it would good to split where each of us will >> > be looking at the documentation, so I thought: >> > >> > From: >> > https://docs.openstack.org/sahara/latest/ we can split in 4 parts, since >> > user guide seems to be biggest one person would work on User Guide, and >> > other 3 each work on 2 topics (we can choose freely) >> > >> > From: >> > https://docs.openstack.org/sahara-tests/latest/ it seems very direct, maybe >> > tosky and I can take a look at it (everyone is free to do so as well) >> >> There's the saharaclient docs as well - but they are quite short and I >> already read through most of them when I was working on the APIv2 >> stuff. 
So I'll take the client docs, plus whatever other section in >> the main doc that you want to assign me :) >> >> > >> > If anyone needs help, or the rabbit hole grows too much we can always >> > reorganize the split. >> > >> > What do you think of it? >> >> Good plan. Although we should define our focus a bit better. Are we >> just trying to find outdated/incorrect stuff, or should we also be >> trying to look at the big picture? By big picture I mean trying to >> identify gaps, or thinking about organization and not just content. > > > > I don't believe this totally up to me, what do you think? I would say lets consider both cases, but work mainly on the outdated/incorrect stuff and at PTG/after we can work in more details on the big picture stuff. That plan makes sense to me. >> >> > >> > Thanks >> > >> > >> > >> > >> > -- >> > >> > TELLES NOBREGA >> > >> > SOFTWARE ENGINEER >> > >> > Red Hat Brasil >> > >> > Av. Brg. Faria Lima, 3900 - 8º andar - Itaim Bibi, São Paulo >> > >> > tenobreg at redhat.com >> > >> > TRIED. TESTED. TRUSTED. >> > Red Hat é reconhecida entre as melhores empresas para trabalhar no Brasil >> > pelo Great Place to Work. >> > >> > __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- > > TELLES NOBREGA > > SOFTWARE ENGINEER > > Red Hat Brasil > > Av. Brg. Faria Lima, 3900 - 8º andar - Itaim Bibi, São Paulo > > tenobreg at redhat.com > > TRIED. TESTED. TRUSTED. > Red Hat é reconhecida entre as melhores empresas para trabalhar no Brasil pelo Great Place to Work. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From victor.morales at intel.com Tue Feb 6 17:31:32 2018 From: victor.morales at intel.com (Morales, Victor) Date: Tue, 6 Feb 2018 17:31:32 +0000 Subject: [openstack-dev] [neutron] PTL candidacy for Rocky In-Reply-To: References: Message-ID: +1, even if my vote doesn’t count. From: Miguel Lavalle Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Monday, February 5, 2018 at 11:21 AM To: OpenStack Development Mailing List Subject: [openstack-dev] [neutron] PTL candidacy for Rocky Hello OpenStack Community, I write this to submit my candidacy for the Neutron PTL position during the Rocky cycle. I had the privilege of being the project's PTL for most of the Queens release series and want to have another opportunity helping the team and the community to deliver more and better networking functionality. I have worked for the technology industry 37+ years. After many years in management, I decided to return to the "Light Side of the Force", the technical path, and during the San Diego Summit in 2012 told the Neutron (Quantum at the time) PTL that one day I wanted to be a member of the core team. 
He and the team welcomed me and that started the best period of my career, not only for the never ending learning experiences, but more importantly, for the many talented women and men that I have met along the way. Over these past few years I worked for Rackspace, helping them to deploy and operate Neutron in their public cloud, IBM in their Linux Technology Center, and currently for Huawei, as their Neutron upstream development lead. During the Queens release the team made significant progress in the following fronts: * Continued with the adoption of Oslo Versioned Objects in the DB layer * Implemented QoS rate limits for floating IPs * Delivered the FWaaS V2.0 API * Concluded the implementation of the logging API for security groups, which implements a way to capture and store events related to security groups. * Continued moving externally referenced items to neutron-lib and adopting them in Neutron and the Stadium projects * Welcomed VPNaaS back into the Stadium after the team put it back in shape * Improved team processes such as having a pre-defined weekly schedule for team members to act as bug triagers, gave W+ to additional core members in neutron-lib and re-scheduled the Neutron drivers meeting on alternate days and hours to enable attendance of more people across different time zones Some of the goals that I propose for the team to pursue during the Rocky cycle are: * Finish the implementation of multiple port binding to solve the migration between VIF types in a generic way so operators can switch easily between backends. This is a joint effort with the Nova team * Implement QoS minimum bandwidth allocation in the Placement API to support scheduling of instances based on the network bandwidth available in hosts. This is another joint effort with the Nova team * Synchronize the adoption of the DB layer engine facade with the adoption of Oslo Versioned Objects to avoid situations where they don't cooperate nicely * Implement port forwarding based on floating IPs * Continue moving externally referenced items to neutron-lib and adopting them in Neutron and the Stadium projects. Finish documenting extensions in the API reference. Start the move of generic DB functionality to the library * Expand the work done with the logging API in security groups to FWaaS v2.0 * Continue efforts in expanding our team and making its work easier. While we had some success during Queens, this is an area where we need to maintain our focus Thank you for your consideration and for taking the time to read this Miguel Lavalle (mlavalle) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mriedemos at gmail.com Tue Feb 6 18:04:26 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 6 Feb 2018 12:04:26 -0600 Subject: [openstack-dev] [nova] Notification update week 6 In-Reply-To: <1517844723.7728.13@smtp.office365.com> References: <1517844723.7728.13@smtp.office365.com> Message-ID: <9470f0f9-14b2-d535-70c4-ee5c3c64ac41@gmail.com> On 2/5/2018 9:32 AM, Balázs Gibizer wrote: > Introduce instance.lock and instance.unlock notifications > --------------------------------------------------------- > A specless bp has been proposed to the Rocky cycle > https://blueprints.launchpad.net/nova/+spec/trigger-notifications-when-lock-unlock-instances > > Some preliminary discussion happened in an earlier patch > https://review.openstack.org/#/c/526251/ > > Add the user id and project id of the user initiated the instance > action to the notification > ----------------------------------------------------------------- > A new bp has been proposed > https://blueprints.launchpad.net/nova/+spec/add-action-initiator-to-instance-action-notifications > > As the user who initiates the instance action (e.g. reboot) could be > different from the user owning the instance it would make sense to > include the user_id and project_id of the action initiatior to the > versioned instance action notifications as well. Both should be mentioned during the 'open discussion' part of the weekly nova meeting but at first glance I think these are both OK. -- Thanks, Matt From zbitter at redhat.com Tue Feb 6 18:23:52 2018 From: zbitter at redhat.com (Zane Bitter) Date: Tue, 6 Feb 2018 13:23:52 -0500 Subject: [openstack-dev] [all][Kingbird]Multi-Region Orchestrator In-Reply-To: References: Message-ID: <2500e357-23a3-2d53-0b5c-591dbd0d4cbb@redhat.com> On 31/01/18 01:49, Goutham Pratapa wrote: > *Kingbird (The Multi Region orchestrator):* > > We are proud to announce kingbird is not only a centralized quota and > resource-manager but also a  Multi-region Orchestrator. I'd invite you to consider coming up with a different short description for the project, because this one reads ambiguously. It can be interpreted as either an orchestrator that works across multiple regions, or a tool that 'orchestrates' multiple regions for some new definition of 'orchestration' (and I regret that we already have more than one). I gather you mean the latter; the former already exists in OpenStack. cheers, Zane. From whayutin at redhat.com Tue Feb 6 18:58:09 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Tue, 6 Feb 2018 13:58:09 -0500 Subject: [openstack-dev] [tripleo] how to reproduce tripleo ci Message-ID: Greetings, The TripleO-CI team has added a recreate / reproduce script to all the tripleo upstream ci jobs as an artifact [1-2] much like the devstack reproduce.sh script. If you find yourself in need of recreating a tripleo ci job please take a look at the instructions. At this time the reproduction of ci is only supported by using an openstack cloud to provision test nodes, libvirt and other approaches are not yet supported but are on the roadmap. Thank you! [1] https://docs.openstack.org/tripleo-docs/latest/contributor/reproduce-ci.html [2] http://tripleo.org/contributor/reproduce-ci.html -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From emilien at redhat.com Tue Feb 6 19:53:59 2018 From: emilien at redhat.com (Emilien Macchi) Date: Tue, 6 Feb 2018 11:53:59 -0800 Subject: [openstack-dev] [tripleo] The Weekly Owl - 8th Edition Message-ID: Note: this is the eighth edition of a weekly update of what happens in TripleO. The goal is to provide a short reading (less than 5 minutes) to learn where we are and what we're doing. Any contributions and feedback are welcome. Link to the previous version: http://lists.openstack.org/pipermail/openstack-dev/2018-January/126775.html +---------------------------------+ | General announcements | +---------------------------------+ +--> Focus is on finishing the blueprints that have granted FFE; High/Critical bugs; CI stabilization; Rocky planning. +--> We have 3 weeks to produce a Queens release candidate! +--> Welcome to our new contributor Avishay Machluf! +------------------------------+ | Continuous Integration | +------------------------------+ +--> TripleO CI squad is proud to announce a new tool to reproduce any CI job in your environment: https://docs.openstack.org/tripleo-docs/latest/contributor/reproduce-ci.html +--> Rover is Sagi and ruck is Rafael. Please let them know any new CI issue. +--> Master promotion is 14 days, Pike is 0 days and Ocata is 9 days. +--> More: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting and https://goo.gl/D4WuBP +-------------+ | Upgrades | +-------------+ +--> Reviews are *highly* needed on FFU, Queens upgrade workflow (also testing blocked) +--> More: https://etherpad.openstack.org/p/tripleo-upgrade-squad-status and https://etherpad.openstack.org/p/tripleo-upgrade-squad-meeting +---------------+ | Containers | +---------------+ +--> Rocky planning, no major updates this week. +--> More: https://etherpad.openstack.org/p/tripleo-containers-squad-status +--------------+ | Integration | +--------------+ +--> Rocky planning, no major updates this week. +--> More: https://etherpad.openstack.org/p/tripleo-integration-squad-status +---------+ | UI/CLI | +---------+ +--> Team is planning work in Rocky +--> Roles management work has been merged! +--> More: https://etherpad.openstack.org/p/tripleo-ui-cli-squad-status +---------------+ | Validations | +---------------+ +--> A lot of reviews were made last week, thanks everyone! +--> The squad needs review, please check the etherpad. +--> More: https://etherpad.openstack.org/p/tripleo-validations-squad-status +---------------+ | Networking | +---------------+ +--> Routed control plane is almost merged, waiting for last reviews. +--> IPsec integration is now tested! +--> Rocky planning in progress. +--> More: https://etherpad.openstack.org/p/tripleo-networking-squad-status +--------------+ | Workflows | +--------------+ +--> No updates this week. +--> More: https://etherpad.openstack.org/p/tripleo-workflows-squad-status +-------------+ | Owl facts | +-------------+ Owls come in groups called a "parliament", or a "stare". Source: http://www.writers-free-reference.com/172groupnames.htm Stay tuned! -- Your fellow reporter, Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From hongbin034 at gmail.com Tue Feb 6 20:09:44 2018 From: hongbin034 at gmail.com (Hongbin Lu) Date: Tue, 6 Feb 2018 15:09:44 -0500 Subject: [openstack-dev] [Zun] PTL non candidacy Message-ID: Hi all, Just let you know that I won't run for Zun PTL for Rocky because we already have a good PTL candidate: https://review.openstack.org/#/c/541187/ I am happy to see that we are able to circulate the PTL role, and this is an indication that our project has become mature and our community is healthy. I will definitely continue my contribution to Zun regardless of whether I am the PTL or not. It is a pleasure to work with you and I am looking forward to working with our new PTL to continue to build our project. Best regards, Hongbin -------------- next part -------------- An HTML attachment was scrubbed... URL:

From kendall at openstack.org Tue Feb 6 20:18:23 2018 From: kendall at openstack.org (Kendall Waters) Date: Tue, 6 Feb 2018 14:18:23 -0600 Subject: [openstack-dev] Feb 8 CFP Deadline - OpenStack Summit Vancouver Message-ID: Hi everyone, The Vancouver Summit CFP closes in two days: February 8 at 11:59pm Pacific Time (February 9 at 6:59am UTC). For Vancouver, the Summit Tracks have evolved to cover the entire open infrastructure landscape. Get your talks in for: • Container infrastructure • Edge computing • CI/CD • HPC/GPU/AI • Open source community • OpenStack private, public and hybrid cloud The Programming Committees for each Track have provided suggested topics for Summit sessions. View topic ideas for each track and submit your proposals before this week's deadline! If you have any questions, please email summit at openstack.org . Cheers, Kendall Kendall Waters OpenStack Marketing kendall at openstack.org -------------- next part -------------- An HTML attachment was scrubbed... URL:

From brad at redhat.com Tue Feb 6 20:34:38 2018 From: brad at redhat.com (Brad P. Crochet) Date: Tue, 06 Feb 2018 20:34:38 +0000 Subject: [openstack-dev] [tripleo] how to reproduce tripleo ci In-Reply-To: References: Message-ID: On Tue, Feb 6, 2018 at 1:59 PM Wesley Hayutin wrote: > Greetings, > > The TripleO-CI team has added a recreate / reproduce script to all the > tripleo upstream ci jobs as an artifact [1-2] much like the devstack > reproduce.sh script. If you find yourself in need of recreating a tripleo > ci job please take a look at the instructions. > > At this time the reproduction of ci is only supported by using an > openstack cloud to provision test nodes, libvirt and other approaches are > not yet supported but are on the roadmap. > > Great work TripleO-CI team! I've already used this a number of times and it has functioned quite well! > Thank you! > > [1] > https://docs.openstack.org/tripleo-docs/latest/contributor/reproduce-ci.html > [2] http://tripleo.org/contributor/reproduce-ci.html > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Brad P. Crochet, RHCA, RHCE, RHCVA, RHCDS Principal Software Engineer (c) 704.236.9385 -------------- next part -------------- An HTML attachment was scrubbed...
URL: From kennelson11 at gmail.com Tue Feb 6 21:31:06 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 06 Feb 2018 21:31:06 +0000 Subject: [openstack-dev] [All][Election][Tacker][Swift][Stable][Packaging_Rpm][OpenStack Helm][OpenStackClient][Manila][Loci][Freezer][Kury][Dragonflow][Designate][Cyborg] Last Days for Nominations Message-ID: Hello All! A quick reminder that we are in the last hours for PTL candidate nominations. If you want to stand for PTL, don't delay, follow the instructions at [1] to make sure the community knows your intentions. Make sure your nomination has been submitted to the openstack/election repository and approved by election officials. Election statistics[2]: This means that with approximately 2 days left, 13 projects will be deemed leaderless. In this case the TC will oversee PTL selection as described by [3]. Thank you, Kendall Nelson(diablo_rojo) [1] http://governance.openstack.org/election/#how-to-submit-your-candidacy [2] Assuming the open reviews below are validated https://review.openstack.org/#/q/is:open+project:openstack/election [3] http://governance.openstack.org/resolutions/20141128-elections-process-for-leaderless-programs.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Tue Feb 6 21:46:47 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 06 Feb 2018 21:46:47 +0000 Subject: [openstack-dev] [PTG] Game Night! Message-ID: Hello PTG Goers, Over the last few PTG incarnations, what started as a group of ~5 nerds playing DnD has gradually grown to easily a dozen people playing a variety of tabletop games. For this PTG, I thought I would formalize it a bit more and plan what games are being brought and who is interested. I put together this etherpad[1] to collect a list of games people are bringing and who wants to play. I was thinking we could get together Thursday later in the evening so there is still time for people to get together with their project teams for dinner plans. Not confirmed on where we will play yet (maybe the lobby bar) so if you can provide email + irc nick so I can get a hold of you that would be great! Nerd out, -Kendall Nelson (diablo_rojo) [1]https://etherpad.openstack.org/p/DUB_Game_Night -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Tue Feb 6 23:39:11 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 06 Feb 2018 23:39:11 +0000 Subject: [openstack-dev] [All][Election][Tacker][Swift][Stable][Packaging_Rpm][OpenStack Helm][OpenStackClient][Manila][Loci][Freezer][Kury][Dragonflow][Designate][Cyborg] Last Days for Nominations In-Reply-To: References: Message-ID: I should also note that it's significantly less than two days at this point (I had said two days in the previous email in this thread) and want to make sure you are all aware of the deadline. -Kendall (diablo_rojo) On Tue, 6 Feb 2018, 1:31 pm Kendall Nelson, wrote: > Hello All! > > A quick reminder that we are in the last hours for PTL candidate > nominations. > > If you want to stand for PTL, don't delay, follow the instructions > at [1] to make sure the community knows your intentions. > > Make sure your nomination has been submitted to the openstack/election > repository and approved by election officials. > > Election statistics[2]: > > This means that with approximately 2 days left, 13 projects will > be deemed leaderless. In this case the TC will oversee PTL selection as > described by [3]. 
> > Thank you, > > Kendall Nelson(diablo_rojo) > > [1] http://governance.openstack.org/election/#how-to-submit-your-candidacy > [2] Assuming the open reviews below are validated > https://review.openstack.org/#/q/is:open+project:openstack/election > [3] > http://governance.openstack.org/resolutions/20141128-elections-process-for-leaderless-programs.html > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Wed Feb 7 00:39:16 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 6 Feb 2018 18:39:16 -0600 Subject: [openstack-dev] [all] [tc] Community Goals for Rocky In-Reply-To: References: <397BB99F-D7B2-47B3-9724-E8B628EFD5C2@cern.ch> <76c4df1e-2e82-96c6-a983-36040855a42d@gmail.com> Message-ID: <52bebfe9-fc5b-8b44-8fb9-d136a01f3235@gmail.com> On 2/6/2018 8:44 AM, Emilien Macchi wrote: > TC voted (but not approved yet) and selected 2 goals that will likely be > approved if no strong voice is raised this week: > > Remove mox > https://review.openstack.org/#/c/532361/ > > Toggle the debug option at runtime > https://review.openstack.org/#/c/534605/ > > If you have any comment on these 2 selected goals, please say it now > otherwise TC will approve it and we'll discuss about details at the PTG. Have we had any substantial input from the elected user committee members about their priority for the proposed goals? We have a lot of TC input on these, and some (but not much) developer feedback, but it seems if we have an elected set of people that represent our users, we should get their input on the goals we plan to prioritize community-wide in the upcoming release, yeah? -- Thanks, Matt From MM9745 at att.com Wed Feb 7 01:32:22 2018 From: MM9745 at att.com (MCEUEN, MATT) Date: Wed, 7 Feb 2018 01:32:22 +0000 Subject: [openstack-dev] [openstack-helm] Rocky PTL candidacy Message-ID: <7C64A75C21BB8D43BD75BB18635E4D89654DF9F5@MOSTLS1MSGUSRFF.ITServices.sbc.com> Hi Team, I'd like to announce my candidacy to continue as the OpenStack-Helm PTL for the Rocky cycle. I have been working in the OpenStack community for about a year and a half, helping lead upstream development efforts for my employer. In this capacity I worked to bring new OpenStack contributors up to speed, help them understand and keep them aligned to community goals, and to be the voice of community and upstream-first development within my workplace. More recently I have had the privilege of taking a more hands-on role with OpenStack-Helm, and have served as PTL as it transitioned to become an Official OpenStack project. OpenStack-Helm is a technically impressive project, and has the potential to not only redefine how OpenStack clouds are deployed and operated, but also to help redefine what it means to be an OpenStack cloud, by allowing simple stitching together of OpenStack and third-party components into custom solutions. However, the reason OpenStack-Helm is successful is because of a passionate team of talented engineers who work tirelessly to push the boundary of what's possible with containerized technology. I'm humbled daily by this team, and a key goal of mine in the Rocky cycle is to enable you, help you succeed, and grow your ranks. I see PTL as a servant leader role. 
In the Rocky release I would like to accomplish the following: * Cut our 1.0 OpenStack-Helm release * Build the skills and experience of the existing OpenStack-Helm team members * Create mentoring/onboarding opportunities to onboard new team members quickly * Build gates for Ocata and begin to ensure compatibility with Pike and Queens * Develop and formalize our plans for OpenStack-Helm releases * Groom a more diverse core reviewer team * Harden OpenStack-Helm resiliency, scalability, and day 2 ops * Create ops-oriented documentation for OpenStack-Helm * Achieve OpenStack-Helm cross-gating with other projects * Collaborate with other project teams on their charts' development Thank you for your consideration! Matt McEuen -------------- next part -------------- An HTML attachment was scrubbed... URL: From liujiong at gohighsec.com Wed Feb 7 03:13:23 2018 From: liujiong at gohighsec.com (Jiong Liu) Date: Wed, 7 Feb 2018 11:13:23 +0800 Subject: [openstack-dev] [barbican] candidacy for PTL Message-ID: <000f01d39fc1$9cf5b810$d6e12830$@gohighsec.com> +1, thanks Dave for leading Barbican team in the past cycles > Message: 20 > Date: Mon, 05 Feb 2018 15:13:31 -0500 > From: Ade Lee > To: "OpenStack Development Mailing List (not for usage questions)" > > Subject: [openstack-dev] [barbican] candidacy for PTL > Message-ID: <1517861611.9647.57.camel at redhat.com> > Content-Type: text/plain; charset="UTF-8" > Fellow Barbicaneers, > I'd like to nominate myself to serve as Barbican PTL through the > Rocky cycle. > Dave has done a great job at keeping the project growing and I'd > like to continue his good work. > This is an exciting time for Barbican. With more distributions > and installers incorporating Barbican, and a renewed focus on > meeting security and compliance requirements, deployers will be > relying on Barbican to securely implement some of the use cases > that we've been working on for the past few years (volume encryption, > image signing, swift object encryption etc.). > Moreover, work has been progressing in having castellan adopted as > a base service for OpenStack applications - hopefully increasing > the deployment of secure secret management across the board. > In particular, for the Rocky cycle, I'd like to continue the progress > made in Queens to: > 1) Grow the Barbican team of contributors and core reviewers. > 2) Help drive further collaboration with other Openstack projects > with joint blueprints. > 3) Help ensure that deployments are successful by keeping up on > bugs fixes and backports. > 4) Help develop new secret store plugins, in particular : > -- a castallan secret store that will allow us to use vault and > custodia backends. > -- SGX? I'm excited about introducing SGX enhancement into barbican. We could implement another crypto plugin with SGX and it's much safer than the default simple crypto plugin. BTW, Intel is testing this feature and the demo is open source on github https://github.com/cloud-security-research/sgx-kms > 5) Continue the stability and maturity enhancements. > Thank you in advance for this opportunity to serve. > --Ade Lee (alee) From kevin.zhao at linaro.org Wed Feb 7 03:25:09 2018 From: kevin.zhao at linaro.org (Kevin Zhao) Date: Wed, 7 Feb 2018 11:25:09 +0800 Subject: [openstack-dev] [Zun] PTL non candidacy In-Reply-To: References: Message-ID: Hongbin, Thanks for your enormous contribution to Zun as PTL for last several releases. Fortunately, we will still be happy to work together in Zun. 
I will try my best to support Shengqin in the next release, and hope Zun will become even more mature in the future. On 7 February 2018 at 04:09, Hongbin Lu wrote: > Hi all, > > Just let you know that I won't run for Zun PTL for Rocky because we > already have a good PTL candidate: > > https://review.openstack.org/#/c/541187/ > > I am happy to see that we are able to circulate the PTL role and this is > an indication that our project has become mature and our community is healthy. > I will definitely continue my contribution to Zun regardless of whether I am the > PTL or not. > > It is a pleasure to work with you and I am looking forward to working > with our new PTL to continue to build our project. > > Best regards, > Hongbin > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL:

From sam47priya at gmail.com Wed Feb 7 05:01:17 2018 From: sam47priya at gmail.com (Sam P) Date: Wed, 7 Feb 2018 14:01:17 +0900 Subject: [openstack-dev] [masakari] Rocky PTL Candidacy Message-ID: Hello everyone, I’m Sampath Priyankara. I would like to announce my candidacy to continue as PTL of Masakari for the Rocky development cycle. In Queens, we mainly focused on bug fixing, documentation, adding new features, and becoming an official OpenStack project, which is one of the most important achievements from Queens. For Rocky, I would like to continue to focus on: - Recovery method customization feature and integration with Mistral - OpenStack Ansible support for Masakari - Masakari Dashboard ( Horizon plugin for Masakari) - Continue to work with OpenStack HA team to develop OpenStack resource agents as an alternative for masakari-monitors. - Ironic Bare Metal instance HA support While working on the above development, I think it is equally important to focus on raising awareness and adoption of Masakari, improving the diversity of the team, and improving cross-project/community communication and support. I will make my best effort to find more users, more reviewers and more developers for Masakari. Finally, I would like to thank all Masakari contributors for your hard work over past cycles, and all OpenStack community members for your valuable comments, opportunities, and all the help you gave. Also, thank you for considering my candidacy. --- Regards, Sampath (samP) -------------- next part -------------- An HTML attachment was scrubbed... URL:

From honjo.rikimaru at po.ntt-tx.co.jp Wed Feb 7 05:38:12 2018 From: honjo.rikimaru at po.ntt-tx.co.jp (Rikimaru Honjo) Date: Wed, 7 Feb 2018 14:38:12 +0900 Subject: [openstack-dev] [glance][cinder]Question about cinder as glance store Message-ID: <624e7b1f-c503-fdea-9866-687a8cc14c8f@po.ntt-tx.co.jp> Hello, I'm planning to use cinder as the glance store. And I'll set up cinder to connect to storage by iSCSI multipath. In this case, can I run glance-api and cinder-volume on the same node? In my understanding, glance-api will attach a volume to its own node and write an uploaded image to the volume if the glance backend is cinder. I'm afraid of a race condition between cinder-volume's iSCSI operations and glance-api's iSCSI operations. Is there a possibility of this occurring?
-- _/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/ Rikimaru Honjo E-mail:honjo.rikimaru at po.ntt-tx.co.jp

From openstack.org at sodarock.com Wed Feb 7 06:23:11 2018 From: openstack.org at sodarock.com (John Villalovos) Date: Tue, 6 Feb 2018 22:23:11 -0800 Subject: [openstack-dev] [ironic] Nominating Hironori Shiina for ironic-core In-Reply-To: References: Message-ID: +1 On Mon, Feb 5, 2018 at 10:12 AM, Julia Kreger wrote: > I would like to nominate Hironori Shiina to ironic-core. He has been > working in the ironic community for some time, and has been helping > over the past several cycles with more complex features. He has > demonstrated an understanding of Ironic's code base, mechanics, and > overall community style. His review statistics are also extremely > solid. I personally have a great deal of trust in his reviews. > > I believe he would make a great addition to our team. > > Thanks, > > -Julia > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL:

From rico.lin.guanyu at gmail.com Wed Feb 7 08:13:01 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Wed, 7 Feb 2018 16:13:01 +0800 Subject: [openstack-dev] [heat][rally] What should we do with legacy-rally-dsvm-fakevirt-heat Message-ID: Hi heat and rally team, Right now, in heat's zuul jobs, we still have one legacy job to migrate, `legacy-rally-dsvm-fakevirt-heat` [1], for which I already put a patch out here [2]. But after discussion with the infra team, it seems best if we can define this job in rally and reference it in heat. So my questions to the rally team are: do we still need this job, and what do you think about moving its definition into rally? [1] https://github.com/openstack-infra/project-config/blob/master/zuul.d/projects.yaml#L6979 [2] https://review.openstack.org/#/c/509141 -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL:
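[Editor's illustration] Moving the job definition into rally and consuming it from heat could roughly look like the following native Zuul v3 sketch. Job, parent and playbook names here are assumptions for illustration only, not the actual definitions:

    # in rally, e.g. zuul.d/jobs.yaml (assumed layout)
    - job:
        name: rally-dsvm-fakevirt-heat
        parent: legacy-dsvm-base
        run: playbooks/legacy/rally-dsvm-fakevirt-heat/run.yaml
        post-run: playbooks/legacy/rally-dsvm-fakevirt-heat/post.yaml
        required-projects:
          - openstack/heat
          - openstack/rally

    # in heat's .zuul.yaml, just reference the job by name
    - project:
        check:
          jobs:
            - rally-dsvm-fakevirt-heat

With something like this, heat's project-config entry for the legacy job could then be dropped.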
From thierry at openstack.org Wed Feb 7 09:06:22 2018 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 7 Feb 2018 10:06:22 +0100 Subject: [openstack-dev] [all] [tc] Community Goals for Rocky In-Reply-To: <52bebfe9-fc5b-8b44-8fb9-d136a01f3235@gmail.com> References: <397BB99F-D7B2-47B3-9724-E8B628EFD5C2@cern.ch> <76c4df1e-2e82-96c6-a983-36040855a42d@gmail.com> <52bebfe9-fc5b-8b44-8fb9-d136a01f3235@gmail.com> Message-ID: Matt Riedemann wrote: > On 2/6/2018 8:44 AM, Emilien Macchi wrote: >> TC voted (but not approved yet) and selected 2 goals that will likely be >> approved if no strong voice is raised this week: >> >> Remove mox >> https://review.openstack.org/#/c/532361/ >> >> Toggle the debug option at runtime >> https://review.openstack.org/#/c/534605/ >> >> If you have any comment on these 2 selected goals, please say it now >> otherwise TC will approve it and we'll discuss about details at the PTG. > > Have we had any substantial input from the elected user committee > members about their priority for the proposed goals? We have a lot of TC > input on these, and some (but not much) developer feedback, but it seems > if we have an elected set of people that represent our users, we should > get their input on the goals we plan to prioritize community-wide in the > upcoming release, yeah? We got a few comments on the reviews and the TC office hours. I'll shoot a pointer to this thread to openstack-ops to make sure they are aware of it. -- Thierry Carrez (ttx)

From Paul.Vaduva at enea.com Wed Feb 7 09:58:00 2018 From: Paul.Vaduva at enea.com (Paul Vaduva) Date: Wed, 7 Feb 2018 09:58:00 +0000 Subject: [openstack-dev] [vitrage] Vitrage alarm processing behavior Message-ID: Hi Vitrage developers, I have a question about vitrage inner workings. I ported the doctor datasource from the master branch to an earlier version of vitrage (1.3.1). I noticed some behavior and I am wondering if it's OK or a bug of some sort. Here it is:
1. I am sending an event for raising an alarm to the doctor datasource of vitrage.
2. I am receiving the event, hence the alarm is displayed on the vitrage dashboard, attached to the affected resource (as expected).
3. If I have configured snapshots_interval=10 in /etc/vitrage/vitrage.conf, the alarm disappears after a while.
Fragment from /etc/vitrage/vitrage.conf: *************** [datasources] types = nova.host,nova.instance,nova.zone,cinder.volume,neutron.network,neutron.port,doctor snapshots_interval=10 *************** On the other hand, if I comment it out, the alarm persists: ************** [datasources] types = nova.host,nova.instance,nova.zone,cinder.volume,neutron.network,neutron.port,doctor #snapshots_interval=10 ************** I am interested in whether this behavior is correct or a bug. My intention is to create some sort of hybrid datasource starting from the doctor one, that receives events for raising alarms like compute.host.down but uses polling to clear them. Best Regards, Paul Vaduva -------------- next part -------------- An HTML attachment was scrubbed... URL:

From jistr at redhat.com Wed Feb 7 10:26:47 2018 From: jistr at redhat.com (Jiří Stránský) Date: Wed, 7 Feb 2018 11:26:47 +0100 Subject: [openstack-dev] [tripleo] how to reproduce tripleo ci In-Reply-To: References: Message-ID: On 6.2.2018 19:58, Wesley Hayutin wrote: > Greetings, > > The TripleO-CI team has added a recreate / reproduce script to all the > tripleo upstream ci jobs as an artifact [1-2] much like the devstack > reproduce.sh script. If you find yourself in need of recreating a tripleo > ci job please take a look at the instructions. That looks great, kudos CI team! > > At this time the reproduction of ci is only supported by using an openstack > cloud to provision test nodes, libvirt and other approaches are not yet > supported but are on the roadmap. If someone needs to deploy on libvirt something resembling CI, it seems generally possible to use the featuresets as general_config for OOOQ on libvirt too (with small tweaks). I've used featuresets to deploy an overcloud (on a pre-existing reusable undercloud) when working on Kubernetes/OpenShift [3-4]. The main thing to watch out for is that the featuresets may be setting NIC configs unsuitable for a libvirt env. I think those are set either via the `network_isolation_args` parameter directly in the featureset, or via referencing a t-h-t scenario which includes `OS::TripleO::*::Net::SoftwareConfig` overrides in its resource registry. So one has to make sure to override those overrides. :) Jirka > > Thank you!
> > [1] > https://docs.openstack.org/tripleo-docs/latest/contributor/reproduce-ci.html > [2] http://tripleo.org/contributor/reproduce-ci.html [3] https://www.jistr.com/blog/2017-11-21-kubernetes-in-tripleo/ [4] https://www.jistr.com/blog/2018-01-04-openshift-origin-in-tripleo/ > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From geguileo at redhat.com Wed Feb 7 10:27:04 2018 From: geguileo at redhat.com (Gorka Eguileor) Date: Wed, 7 Feb 2018 11:27:04 +0100 Subject: [openstack-dev] [glance][cinder]Question about cinder as glance store In-Reply-To: <624e7b1f-c503-fdea-9866-687a8cc14c8f@po.ntt-tx.co.jp> References: <624e7b1f-c503-fdea-9866-687a8cc14c8f@po.ntt-tx.co.jp> Message-ID: <20180207102704.xq7fch4apuqimqif@localhost> On 07/02, Rikimaru Honjo wrote: > Hello, > > I'm planning to use cinder as glance store. > And, I'll setup cinder to connect storage by iSCSI multipath. > > In this case, can I run glance-api and cinder-volume on the same node? > > In my understanding, glance-api will attach a volume to own node and > write a uploaded image to the volume if glance backend is cinder. > I afraid that the race condition of cinder-volume's iSCSI operations > and glance-api's iSCSI operations. > Is there possibility of occurring it? > -- > _/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/ > Rikimaru Honjo > E-mail:honjo.rikimaru at po.ntt-tx.co.jp > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev Hi, When properly set with the right configuration and the right system and OpenStack packages, Cinder, OS-Brick, and Nova no longer have race conditions with iSCSI operations anymore (single or multipathed), not even with drivers that do "shared target". So I would assume that Glance won't have any issues either as long as it's properly making the Cinder and OS-Brick calls. Cheers, Gorka. 
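[Editor's illustration] For anyone wanting to try the setup discussed above (cinder as the glance backend, with multipath on the cinder data path), the relevant options would be along these lines. This is a minimal sketch; the endpoint, credentials and backend section name are assumptions, not a tested reference:

    # glance-api.conf
    [glance_store]
    stores = cinder
    default_store = cinder
    # credentials glance uses when creating/attaching the image volumes
    cinder_store_auth_address = http://controller:5000/v3
    cinder_store_project_name = service
    cinder_store_user_name = glance
    cinder_store_password = GLANCE_PASS

    # cinder.conf, in the per-backend section (assumed name "lvm"),
    # if multipath should also be used for image/volume copy operations
    [lvm]
    use_multipath_for_image_xfer = True

Whether glance-api and cinder-volume can safely share a node then comes down to the os-brick versions in use, as described in the reply above.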
From balazs.gibizer at ericsson.com Wed Feb 7 10:39:10 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Wed, 7 Feb 2018 11:39:10 +0100 Subject: [openstack-dev] [nova] Notification update week 6 In-Reply-To: <9470f0f9-14b2-d535-70c4-ee5c3c64ac41@gmail.com> References: <1517844723.7728.13@smtp.office365.com> <9470f0f9-14b2-d535-70c4-ee5c3c64ac41@gmail.com> Message-ID: <1517999950.7728.23@smtp.office365.com> On Tue, Feb 6, 2018 at 7:04 PM, Matt Riedemann wrote: > On 2/5/2018 9:32 AM, Balázs Gibizer wrote: >> Introduce instance.lock and instance.unlock notifications >> --------------------------------------------------------- >> A specless bp has been proposed to the Rocky cycle >> https://blueprints.launchpad.net/nova/+spec/trigger-notifications-when-lock-unlock-instances >>  >> Some preliminary discussion happened in an earlier patch >> https://review.openstack.org/#/c/526251/ >> >> Add the user id and project id of the user initiated the instance >> action to the notification >> ----------------------------------------------------------------- >> A new bp has been proposed >> https://blueprints.launchpad.net/nova/+spec/add-action-initiator-to-instance-action-notifications >>  >> As the user who initiates the instance action (e.g. reboot) could be >> different from the user owning the instance it would make sense to >> include the user_id and project_id of the action initiatior to the >> versioned instance action notifications as well. > > Both should be mentioned during the 'open discussion' part of the > weekly nova meeting but at first glance I think these are both OK. I've added them to the agenda for tomorrow. cheers, gibi > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From ifat.afek at nokia.com Wed Feb 7 11:24:24 2018 From: ifat.afek at nokia.com (Afek, Ifat (Nokia - IL/Kfar Sava)) Date: Wed, 7 Feb 2018 11:24:24 +0000 Subject: [openstack-dev] [vitrage] Vitrage alarm processing behavior In-Reply-To: References: Message-ID: <2E8BC35D-3FC3-40C1-85F2-09E4C3D4BB2E@nokia.com> Hi Paul, It sounds like a bug. Alarms created by a datasource are not supposed to be deleted later on. It might be a bug that was fixed in Queens [1]. I’m not sure which Vitrage version you are actually using. I failed to find a vitrage version 1.3.1. Could it be that you are referring to a version of python-vitrageclient or vitrage-dashboard? In any case, if you are using an older version, I suggest that you try to use the fix that I mentioned [1] and see if it helps. [1] https://review.openstack.org/#/c/524228 Best Regards, Ifat. From: Paul Vaduva Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Wednesday, 7 February 2018 at 11:58 To: "openstack-dev at lists.openstack.org" Subject: [openstack-dev] [vitrage] Vitrage alarm processing behavior Hi Vitrage developers, I have a question about vitrage innerworkings, I ported doctor datasource from master branch to an earlier version of vitrage (1.3.1). I noticed some behavior I am wondering if it's ok or it is bug of some sort. Here it is: 1. I am sending some event for rasing an alarm to doctor datasource of vitrage. 2. 
I am receiving the event hence the alarm is displayed on vitrage dashboard attached to the affected resource (as expected) 3. If I have configured snapshot_interval=10 in /etc/vitrage/vitrage.conf The alarm disapears after a while fragment from /etc/vitrage/vitrage.conf *************** [datasources] types = nova.host,nova.instance,nova.zone,cinder.volume,neutron.network,neutron.port,doctor snapshots_interval=10 *************** On the other hand if I comment it out the alarm persists ************** [datasources] types = nova.host,nova.instance,nova.zone,cinder.volume,neutron.network,neutron.port,doctor #snapshots_interval=10 ************** I am interested if this behavior is correct or is this a bug. My intention is to create some sort of hybrid datasource starting from the doctor one, that receives events for raising alarms like compute.host.down but uses polling to clear them. Best Regards, Paul Vaduva -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Wed Feb 7 11:28:51 2018 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 7 Feb 2018 12:28:51 +0100 Subject: [openstack-dev] [ptg] Dublin PTG proposed track schedule In-Reply-To: <1627c084-b57d-ae35-3649-fa35979ebe8d@gmail.com> References: <27e2401b-9753-51b9-f465-eeb0281dc350@openstack.org> <34c23477-5508-417c-9193-26399a506f11@openstack.org> <349343ee-a79b-f6ba-50c0-dda08ec2aba1@openstack.org> <1627c084-b57d-ae35-3649-fa35979ebe8d@gmail.com> Message-ID: <7d643a97-36f2-1852-5a49-b35bd7def714@openstack.org> Lance Bragstad wrote: > On 02/05/2018 09:34 AM, Thierry Carrez wrote: >> Lance Bragstad wrote: >>> Colleen started a thread asking if there was a need for a baremetal/vm >>> group session [0], which generated quite a bit of positive response. Is >>> there still a possibility of fitting that in on either Monday or >>> Tuesday? The group is usually pretty large. >>> >>> [0] >>> http://lists.openstack.org/pipermail/openstack-dev/2018-January/126743.html >> Yes, we can still allocate a 80-people room or a 30-people one. Let me >> know if you prefer Monday, Tuesday or both. > Awesome - we're collecting topics in an etherpad, but we're likely only > going to get to three or four of them [0] [1]. We can work those topics > into two sessions. One on Monday and one on Tuesday, just to break > things up in case other things are happening those days that people want > to get to. Looking at that etherpad, do you need the room allocated for all the day on Monday/Tuesday ? It's doable, but if you already know you won't do anything on Monday afternoon, we can make that room reservable then. What should the track name be ? Some people suggested "cross-project identity integration" instead of baremetal-vm. -- Thierry Carrez (ttx) -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: OpenPGP digital signature URL: From dtantsur at redhat.com Wed Feb 7 11:39:38 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Wed, 7 Feb 2018 12:39:38 +0100 Subject: [openstack-dev] [ptg] Dublin PTG proposed track schedule In-Reply-To: <7d643a97-36f2-1852-5a49-b35bd7def714@openstack.org> References: <27e2401b-9753-51b9-f465-eeb0281dc350@openstack.org> <34c23477-5508-417c-9193-26399a506f11@openstack.org> <349343ee-a79b-f6ba-50c0-dda08ec2aba1@openstack.org> <1627c084-b57d-ae35-3649-fa35979ebe8d@gmail.com> <7d643a97-36f2-1852-5a49-b35bd7def714@openstack.org> Message-ID: On 02/07/2018 12:28 PM, Thierry Carrez wrote: > Lance Bragstad wrote: >> On 02/05/2018 09:34 AM, Thierry Carrez wrote: >>> Lance Bragstad wrote: >>>> Colleen started a thread asking if there was a need for a baremetal/vm >>>> group session [0], which generated quite a bit of positive response. Is >>>> there still a possibility of fitting that in on either Monday or >>>> Tuesday? The group is usually pretty large. >>>> >>>> [0] >>>> http://lists.openstack.org/pipermail/openstack-dev/2018-January/126743.html >>> Yes, we can still allocate a 80-people room or a 30-people one. Let me >>> know if you prefer Monday, Tuesday or both. >> Awesome - we're collecting topics in an etherpad, but we're likely only >> going to get to three or four of them [0] [1]. We can work those topics >> into two sessions. One on Monday and one on Tuesday, just to break >> things up in case other things are happening those days that people want >> to get to. > > Looking at that etherpad, do you need the room allocated for all the day > on Monday/Tuesday ? It's doable, but if you already know you won't do > anything on Monday afternoon, we can make that room reservable then. > > What should the track name be ? Some people suggested "cross-project > identity integration" instead of baremetal-vm. ++ please do not use bm-vm, it confuses everyone not involved from the beginning > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From thierry at openstack.org Wed Feb 7 11:42:05 2018 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 7 Feb 2018 12:42:05 +0100 Subject: [openstack-dev] [ptg] Track around release cycle, consumption models and stable branches Message-ID: <02df94bd-1a51-e13c-1255-ba6631303937@openstack.org> Hi everyone, I was wondering if anyone would be interested in brainstorming the question of how to better align our release cycle and stable branch maintenance with the OpenStack downstream consumption models. That includes discussing the place of the distributions, the need for LTS, and where does the open source upstream project stop. I have hesitated to propose it earlier, as it sounds like a topic that should be discussed with the wider community at the Forum. And it will, but it feels like this needs a deeper pre-discussion in a productive setting, and tonyb and eumel8 have been proposing that topic on the missing topics etherpad[1], so we might as well take some time at the PTG to cover that. Would anyone be interested in such a discussion ? It would be scheduled on the Tuesday. How much time would we need ? I was thinking we could use only Tuesday afternoon. 
[1] https://etherpad.openstack.org/p/PTG-Dublin-missing-topics -- Thierry Carrez (ttx)

From james.page at ubuntu.com Wed Feb 7 11:46:18 2018 From: james.page at ubuntu.com (James Page) Date: Wed, 07 Feb 2018 11:46:18 +0000 Subject: [openstack-dev] [ptg] Track around release cycle, consumption models and stable branches In-Reply-To: <02df94bd-1a51-e13c-1255-ba6631303937@openstack.org> References: <02df94bd-1a51-e13c-1255-ba6631303937@openstack.org> Message-ID: Hi Thierry On Wed, 7 Feb 2018 at 11:42 Thierry Carrez wrote: > Hi everyone, > > I was wondering if anyone would be interested in brainstorming the > question of how to better align our release cycle and stable branch > maintenance with the OpenStack downstream consumption models. That > includes discussing the place of the distributions, the need for LTS, > and where does the open source upstream project stop. > > I have hesitated to propose it earlier, as it sounds like a topic that > should be discussed with the wider community at the Forum. And it will, > but it feels like this needs a deeper pre-discussion in a productive > setting, and tonyb and eumel8 have been proposing that topic on the > missing topics etherpad[1], so we might as well take some time at the > PTG to cover that. > > Would anyone be interested in such a discussion ? It would be scheduled > on the Tuesday. How much time would we need ? I was thinking we could > use only Tuesday afternoon. I would be interested in participating in this (and I'll still be around Tuesday PM). Cheers James -------------- next part -------------- An HTML attachment was scrubbed... URL:

From zhipengh512 at gmail.com Wed Feb 7 13:00:26 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Wed, 7 Feb 2018 21:00:26 +0800 Subject: [openstack-dev] [acceleration]Cyborg Team Weekly Meeting 2018.02.07 Message-ID: Hi Team, The weekly meeting happens starting at UTC 1500 in #openstack-cyborg; we will wrap up all the remaining patches and discuss the PTG schedule. -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co., Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL:

From Paul.Vaduva at enea.com Wed Feb 7 13:50:26 2018 From: Paul.Vaduva at enea.com (Paul Vaduva) Date: Wed, 7 Feb 2018 13:50:26 +0000 Subject: [openstack-dev] [vitrage] Vitrage alarm processing behavior In-Reply-To: <2E8BC35D-3FC3-40C1-85F2-09E4C3D4BB2E@nokia.com> References: <2E8BC35D-3FC3-40C1-85F2-09E4C3D4BB2E@nokia.com> Message-ID: Hi Ifat, Yes, I’ve checked: the 1.3.1 refers to a deb package (python-vitrage) version built by us, so the git tag used to build that deb is 1.3.0. But I also backported the doctor datasource from the vitrage git master branch. I also noticed that when I configure snapshots_interval=10 I also get this exception in /var/log/vitrage/graph.log around the time the alarms disappear: https://hastebin.com/ukisajojef.sql I've cherry-picked the change you mentioned before, and the alarm that came from the event is now persistent and the exception is gone. So it was a bug. I understand that for doctor datasources I need to have events both for raising the alarm and for clearing it, is that correct?
Best Regards, Paul From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com] Sent: Wednesday, February 7, 2018 1:24 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [vitrage] Vitrage alarm processing behavior Hi Paul, It sounds like a bug. Alarms created by a datasource are not supposed to be deleted later on. It might be a bug that was fixed in Queens [1]. I’m not sure which Vitrage version you are actually using. I failed to find a vitrage version 1.3.1. Could it be that you are referring to a version of python-vitrageclient or vitrage-dashboard? In any case, if you are using an older version, I suggest that you try to use the fix that I mentioned [1] and see if it helps. [1] https://review.openstack.org/#/c/524228 Best Regards, Ifat. From: Paul Vaduva > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Wednesday, 7 February 2018 at 11:58 To: "openstack-dev at lists.openstack.org" > Subject: [openstack-dev] [vitrage] Vitrage alarm processing behavior Hi Vitrage developers, I have a question about vitrage innerworkings, I ported doctor datasource from master branch to an earlier version of vitrage (1.3.1). I noticed some behavior I am wondering if it's ok or it is bug of some sort. Here it is: 1. I am sending some event for rasing an alarm to doctor datasource of vitrage. 2. I am receiving the event hence the alarm is displayed on vitrage dashboard attached to the affected resource (as expected) 3. If I have configured snapshot_interval=10 in /etc/vitrage/vitrage.conf The alarm disapears after a while fragment from /etc/vitrage/vitrage.conf *************** [datasources] types = nova.host,nova.instance,nova.zone,cinder.volume,neutron.network,neutron.port,doctor snapshots_interval=10 *************** On the other hand if I comment it out the alarm persists ************** [datasources] types = nova.host,nova.instance,nova.zone,cinder.volume,neutron.network,neutron.port,doctor #snapshots_interval=10 ************** I am interested if this behavior is correct or is this a bug. My intention is to create some sort of hybrid datasource starting from the doctor one, that receives events for raising alarms like compute.host.down but uses polling to clear them. Best Regards, Paul Vaduva -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Wed Feb 7 13:54:24 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 7 Feb 2018 07:54:24 -0600 Subject: [openstack-dev] [ptg] Track around release cycle, consumption models and stable branches In-Reply-To: <02df94bd-1a51-e13c-1255-ba6631303937@openstack.org> References: <02df94bd-1a51-e13c-1255-ba6631303937@openstack.org> Message-ID: <20180207135424.GA11335@sm-xps> On Wed, Feb 07, 2018 at 12:42:05PM +0100, Thierry Carrez wrote: > Hi everyone, > > I was wondering if anyone would be interested in brainstorming the > question of how to better align our release cycle and stable branch > maintenance with the OpenStack downstream consumption models. That > includes discussing the place of the distributions, the need for LTS, > and where does the open source upstream project stop. > This would be great if we can get some distro/packaging representation in the room. 
From cedric.jeanneret at camptocamp.com Wed Feb 7 13:59:26 2018 From: cedric.jeanneret at camptocamp.com (Cédric Jeanneret) Date: Wed, 7 Feb 2018 14:59:26 +0100 Subject: [openstack-dev] [TripleO] Meet DoubleNO³ Message-ID: <0eb4ee1a-2d56-93b7-6723-7a159494b1f9@camptocamp.com> Dear all, In need of a "lab TripleO" in order to validate updates before pushing them to production, I created some ansible recipes and a whole isolated network. This isolated network mimics the current production 1:1, and is mainly based on libvirt. I named this "project" DoubleNO³, and the name tells a lot about the concepts and architecture: Double-NATed-tripleO. Some explanations: On a dedicated baremetal node, I've installed: - CentOS 7 - Libvirt components The main VMs are: - virtualized RedHat Satellite (in order to use repository snapshots) - virtualized undercloud (prod) - virtualized undercloud (lab) - virtualized computes (lab, 4 nodes) - virtualized controllers (lab, 3 nodes) - virtualized ceph-storages (lab, 2 nodes, with additional volumes for Ceph setup emulation) The lab instances share a network bridge (call it lab-trunk), and each VM has the "right" amount of network interfaces (we use bonding in prod). In order to isolate the lab, I've created a second layer, with a dedicated "prenat" instance (lab-prenat). This one has two interfaces: - one on a libvirt "NAT" network (eth0) - one on the lab-trunk bridge (eth1) Regarding IPs, eth0 has a private IP in the 192.168.x.y scope and is the default route to the Internet via the libvirt NAT. Eth1 has multiple IPs: - one that emulates the public gateway of our production deploy - all the IPMI addresses of our production nodes The second pool allows keeping the Ironic configuration 1:1 with production. In order to make IPMI calls, I've deployed VirtualBMC on the lab-prenat instance, and am using the qemu+ssh capability of libvirt in order to talk to the hypervisor and manage the VMs. All of that is working pretty nicely together. As I decided to use a virtual undercloud node instead of baremetal, it was pretty easy to duplicate the current undercloud node, drop the stack, and redeploy the virtual lab in a swift way. Of course, it all wasn't as painless as it seems; I had "some" issues with the network, especially for IPMI: I wanted to have VirtualBMC directly on the hypervisor, and had many issues with libvirt iptables rules. In the end, using qemu+ssh was just the right, easy way to do things. And this prevents any IPMI listener from being exposed on the outside. But in the end we actually have a 1:1 matching lab we can use to validate every update, and an easy way to roll back to a previous consistent state using libvirt snapshots. Would anyone be interested in more "academic" documentation about the whole stuff (especially the lab itself - Satellite is an (interesting) option)? If so, where should I push that doc? Probably somewhere in "advanced deployment" or something like that. Unfortunately, I don't think I'll be able to open the ansible code soon, but at least the concepts can be explained and configuration examples provided. Cheers, C. -- Cédric Jeanneret Senior Linux System Administrator Infrastructure Solutions Camptocamp SA PSE-A / EPFL, 1015 Lausanne Phone: +41 21 619 10 32 Office: +41 21 619 10 02 Email: cedric.jeanneret at camptocamp.com -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 878 bytes Desc: OpenPGP digital signature URL:
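[Editor's illustration] As a rough sketch of the IPMI wiring described above, registering the lab domains with VirtualBMC over libvirt's qemu+ssh transport could look like this. Domain names, addresses, ports and credentials are made-up assumptions, not taken from the actual setup:

    # on lab-prenat: one virtual BMC per lab domain, each bound to one of
    # the IPMI addresses carried by eth1, talking to libvirt over ssh
    vbmc add lab-controller-0 --address 10.0.0.101 --port 623 \
        --username admin --password secret \
        --libvirt-uri qemu+ssh://root@hypervisor/system
    vbmc start lab-controller-0

    # Ironic (or a quick manual check) can then drive power state as usual:
    ipmitool -I lanplus -H 10.0.0.101 -p 623 -U admin -P secret power status

Because the libvirt URI points back at the hypervisor over ssh, no IPMI or libvirt listener needs to be exposed outside the isolated lab network.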
From pkovar at redhat.com Wed Feb 7 14:11:43 2018 From: pkovar at redhat.com (Petr Kovar) Date: Wed, 7 Feb 2018 15:11:43 +0100 Subject: [openstack-dev] [docs] Documentation meeting today Message-ID: <20180207151143.ee78f1c849f84e925bba04b9@redhat.com> Hi all, The docs meeting will continue today at 16:00 UTC in #openstack-doc, as scheduled. For more details, see the meeting page: https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting Cheers, pk

From gong.yongsheng at 99cloud.net Wed Feb 7 15:01:25 2018 From: gong.yongsheng at 99cloud.net (Yongsheng Gong) Date: Wed, 7 Feb 2018 23:01:25 +0800 Subject: [openstack-dev] [requirements][release] FFE for constraints update for python-tackerclient bug-fix release Message-ID: Hi, The tacker client has an initial queens release which does not have the right reno notes. Recently I have added some reno patches, and the team also wants a feature to land in the queens release. I have requested the newer release at https://review.openstack.org/#/c/541638/ The new feature is at https://review.openstack.org/#/c/541631/ Could you please see how we can get it released? Thanks, Yong sheng gong Tacker -------------- next part -------------- An HTML attachment was scrubbed... URL:

From gong.yongsheng at 99cloud.net Wed Feb 7 15:13:42 2018 From: gong.yongsheng at 99cloud.net (Yongsheng Gong) Date: Wed, 7 Feb 2018 23:13:42 +0800 Subject: [openstack-dev] [tacker] PTL candidacy from yong sheng gong Message-ID: This is my self-nomination to continue running as Tacker PTL for the Rocky cycle. Lots happened in the Tacker Queens cycle. More documents and more features are coming in. VNFFG was enhanced, and the Kubernetes VIM and container VNFs were introduced. Private Zabbix-based application monitoring is done too. OpenStack client Tacker commands are also being developed. In the Rocky cycle, I plan to: - Continue to stabilize tacker and make it of production quality. - Continue with the tacker documentation. - Enhance container-based VNFs - Provide a way to connect VM-based VNFs and container-based VNFs - Introduce more types of VIM, such as a public VIM and an SDN controller VIM Thanks for your consideration. Yong sheng gong -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: PastedGraphic-1.tiff Type: image/tiff Size: 23492 bytes Desc: not available URL:

From lbragstad at gmail.com Wed Feb 7 15:58:59 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Wed, 7 Feb 2018 09:58:59 -0600 Subject: [openstack-dev] [ptg] Dublin PTG proposed track schedule In-Reply-To: <7d643a97-36f2-1852-5a49-b35bd7def714@openstack.org> References: <27e2401b-9753-51b9-f465-eeb0281dc350@openstack.org> <34c23477-5508-417c-9193-26399a506f11@openstack.org> <349343ee-a79b-f6ba-50c0-dda08ec2aba1@openstack.org> <1627c084-b57d-ae35-3649-fa35979ebe8d@gmail.com> <7d643a97-36f2-1852-5a49-b35bd7def714@openstack.org> Message-ID: <8bcfd6aa-9cfc-7409-09f8-f492f8def4f2@gmail.com> On 02/07/2018 05:28 AM, Thierry Carrez wrote: > Lance Bragstad wrote: >> On 02/05/2018 09:34 AM, Thierry Carrez wrote: >>> Lance Bragstad wrote: >>>> Colleen started a thread asking if there was a need for a baremetal/vm >>>> group session [0], which generated quite a bit of positive response. Is >>>> there still a possibility of fitting that in on either Monday or >>>> Tuesday? The group is usually pretty large.
>>>> >>>> [0] >>>> http://lists.openstack.org/pipermail/openstack-dev/2018-January/126743.html >>> Yes, we can still allocate a 80-people room or a 30-people one. Let me >>> know if you prefer Monday, Tuesday or both. >> Awesome - we're collecting topics in an etherpad, but we're likely only >> going to get to three or four of them [0] [1]. We can work those topics >> into two sessions. One on Monday and one on Tuesday, just to break >> things up in case other things are happening those days that people want >> to get to. > Looking at that etherpad, do you need the room allocated for all the day > on Monday/Tuesday ? It's doable, but if you already know you won't do > anything on Monday afternoon, we can make that room reservable then. I agree. I don't think we will need all of Monday and Tuesday. I wanted to get a rough schedule put together so we wouldn't need to reserve the entire room all day. So far we haven't received feedback on the proposed schedule, so it's probably safe to mark the times listed in the etherpad. > > What should the track name be ? Some people suggested "cross-project > identity integration" instead of baremetal-vm. I'm terrible with names. Since the topics are identity specific, that would make sense, but I do feel as though we high-jacked the session and the name =/ I think John Garbutt came up with the original name. I'm sure he can explain the reasoning for it better than I can. > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From dmsimard at redhat.com Wed Feb 7 16:00:17 2018 From: dmsimard at redhat.com (David Moreau Simard) Date: Wed, 7 Feb 2018 11:00:17 -0500 Subject: [openstack-dev] [all][kolla][rdo] Collaboration with Kolla for the RDO test days In-Reply-To: References: Message-ID: Please note that the RDO test days have currently been re-scheduled to (at least) next week, February 15th and 16th. We are currently working our way through different issues and hope they'll be sorted out in time by then. David Moreau Simard Senior Software Engineer | OpenStack RDO dmsimard = [irc, github, twitter] On Mon, Feb 5, 2018 at 10:31 AM, David Moreau Simard wrote: > Hi everyone, > > We've started planning the deployment with the Kolla team, you can see > the etherpad from the "operator" perspective here: > https://etherpad.openstack.org/p/kolla-rdo-m3 > > We'll advertise the test days and how users can participate soon. > > Thanks, > > > David Moreau Simard > Senior Software Engineer | OpenStack RDO > > dmsimard = [irc, github, twitter] > > > On Mon, Jan 29, 2018 at 8:29 AM, David Moreau Simard > wrote: >> Hi ! >> >> For those who might be unfamiliar with the RDO [1] community project: >> we hang out in #rdo, we don't bite and we build vanilla OpenStack >> packages. >> >> These packages are what allows you to leverage one of the deployment >> projects such as TripleO, PackStack or Kolla to deploy on CentOS or >> RHEL. 
>> The RDO community collaborates with these deployment projects by >> providing trunk and stable packages in order to let them develop and >> test against the latest and the greatest of OpenStack. >> >> RDO test days typically happen around a week after an upstream >> milestone has been reached [2]. >> The purpose is to get everyone together in #rdo: developers, users, >> operators, maintainers -- and test not just RDO but OpenStack itself >> as installed by the different deployment projects. >> >> We tried something new at our last test day [3] and it worked out great. >> Instead of encouraging participants to install their own cloud for >> testing things, we supplied a cloud of our own... a bit like a limited >> duration TryStack [4]. >> This lets users without the operational knowledge, time or hardware to >> install an OpenStack environment to see what's coming in the upcoming >> release of OpenStack and get the feedback loop going ahead of the >> release. >> >> We used Packstack for the last deployment and invited Packstack cores >> to deploy, operate and troubleshoot the installation for the duration >> of the test days. >> The idea is to rotate between the different deployment projects to >> give every interested project a chance to participate. >> >> Last week, we reached out to Kolla to see if they would be interested >> in participating in our next RDO test days [5] around February 8th. >> We supply the bare metal hardware and their core contributors get to >> deploy and operate a cloud with real users and developers poking >> around. >> All around, this is a great opportunity to get feedback for RDO, Kolla >> and OpenStack. >> >> We'll be advertising the event a bit more as the test days draw closer >> but until then, I thought it was worthwhile to share some context for >> this new thing we're doing. >> >> Let me know if you have any questions ! >> >> Thanks, >> >> [1]: https://www.rdoproject.org/ >> [2]: https://www.rdoproject.org/testday/ >> [3]: https://dmsimard.com/2017/11/29/come-try-a-real-openstack-queens-deployment/ >> [4]: http://trystack.org/ >> [5]: http://eavesdrop.openstack.org/meetings/kolla/2018/kolla.2018-01-24-16.00.log.html >> >> David Moreau Simard >> Senior Software Engineer | OpenStack RDO >> >> dmsimard = [irc, github, twitter] From prometheanfire at gentoo.org Wed Feb 7 16:07:42 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Wed, 7 Feb 2018 10:07:42 -0600 Subject: [openstack-dev] [requirements][release] FFE for constraints update for python-tackerclient bug-fix release In-Reply-To: References: Message-ID: <20180207160742.ejnudolq774x6zu6@gentoo.org> On 18-02-07 23:01:25, Yongsheng Gong wrote: > Hi, > > The tacker client has na initial queens release which does not have right reno notes. Recently I have added some reno patches. > And also the team wants a feature to land in the queens release. > > I have requested the newer release at https://review.openstack.org/#/c/541638/ > > The new feature is at https://review.openstack.org/#/c/541631/ > > Please see how we can get it released? > It should have time to get in for the freeze, the question I have is 'What in openstack is broken if we update upper-contraints after the freeze instead of before?' A follow up question is 'does this need a global-requirements.txt bump?' +2 from me on the UC bump though -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From chris.friesen at windriver.com Wed Feb 7 16:20:02 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Wed, 7 Feb 2018 10:20:02 -0600 Subject: [openstack-dev] [all][Kingbird]Multi-Region Orchestrator In-Reply-To: <29be24fb-80c4-621b-698e-e2b45f5fcb74@gmail.com> References: <7c7191c1-6bb4-66e9-fbdf-699a9841a2bb@gmail.com> <29be24fb-80c4-621b-698e-e2b45f5fcb74@gmail.com> Message-ID: <5A7B2732.8040101@windriver.com> On 02/05/2018 06:33 PM, Jay Pipes wrote: > It does seem to me, however, that if the intention is *not* to get into the > multi-cloud orchestration game, that a simpler solution to this multi-region > OpenStack deployment use case would be to simply have a global Glance and > Keystone infrastructure that can seamlessly scale to multiple regions. > > That way, there'd be no need for replicating anything. One use-case I've seen for this sort of thing is someone that has multiple geographically-separate clouds, and maybe they want to run the same heat stack in all of them. So they can use global glance/keystone, but they need to ensure that they have the right flavor(s) available in all the clouds. This needs to be done by the admin user, so it can't be done as part of the normal user's heat stack. Something like "create a keypair in each of the clouds with the same public key and same name" could be done by the end user with some coding, but it's convenient to have a tool to do it for you. Chris From prometheanfire at gentoo.org Wed Feb 7 16:23:57 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Wed, 7 Feb 2018 10:23:57 -0600 Subject: [openstack-dev] [OpenStackClient][Security][ec2-api][heat][horizon][ironic][kuryr][magnum][manila][masakari][neutron][senlin][shade][solum][swift][tacker][tricircle][vitrage][watcher][winstackers] Help needed for your release Message-ID: <20180207162357.6vbsj5ty76hvhxiw@gentoo.org> Hi all, it looks like some of your projects may need to cut a queens branch/release. Is there anything we can do to move it along? The following is the list I'm working off of (will be updated as projects release) https://gist.github.com/prometheanfire/9449355352d97207aa85172cd9ef4b9f As of right now it's as follows. 
# Projects without team or release model could not be found in openstack/releases for queens openstack/almanach openstack/compute-hyperv openstack/ekko openstack/gce-api openstack/glare openstack/ironic-staging-drivers openstack/kosmos openstack/mixmatch openstack/mogan openstack/nemesis openstack/networking-dpm openstack/networking-l2gw openstack/networking-powervm openstack/nova-dpm openstack/nova-lxd openstack/nova-powervm openstack/os-xenapi openstack/python-cratonclient openstack/python-glareclient openstack/python-kingbirdclient openstack/python-moganclient openstack/python-oneviewclient openstack/python-valenceclient openstack/swauth openstack/tap-as-a-service openstack/trio2o openstack/valence openstack/vmware-nsx openstack/vmware-nsxlib # Projects missing a release/branch for queens openstackclient OpenStackClient anchor Security ec2-api ec2-api django_openstack_auth horizon horizon-cisco-ui horizon bifrost ironic ironic-python-agent-builder ironic magnum magnum magnum-ui magnum manila-image-elements manila masakari masakari masakari-monitors masakari python-masakariclient masakari os-service-types shade tacker tacker # I think this one is released tacker-horizon tacker # but not this one # Repos with type: horizon-plugin (typically release a little later) manila-ui manila neutron-vpnaas-dashboard neutron senlin-dashboard senlin solum-dashboard solum watcher-dashboard watcher # Repos with type: other heat-agents heat ironic-python-agent ironic kuryr-kubernetes kuryr neutron-vpnaas neutron networking-hyperv winstackers # Repos with type: service ironic ironic swift swift tricircle tricircle vitrage vitrage watcher watcher -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From lhinds at redhat.com Wed Feb 7 16:33:52 2018 From: lhinds at redhat.com (Luke Hinds) Date: Wed, 7 Feb 2018 16:33:52 +0000 Subject: [openstack-dev] [OpenStackClient][Security][ec2-api][heat][horizon][ironic][kuryr][magnum][manila][masakari][neutron][senlin][shade][solum][swift][tacker][tricircle][vitrage][watcher][winstackers] Help needed for your release In-Reply-To: <20180207162357.6vbsj5ty76hvhxiw@gentoo.org> References: <20180207162357.6vbsj5ty76hvhxiw@gentoo.org> Message-ID: On Wed, Feb 7, 2018 at 4:23 PM, Matthew Thode wrote: > Hi all, > > it looks like some of your projects may need to cut a queens > branch/release. Is there anything we can do to move it along? > > The following is the list I'm working off of (will be updated as > projects release) > https://gist.github.com/prometheanfire/9449355352d97207aa85172cd9ef4b9f > > As of right now it's as follows. >From what I know anchor (security) has no maintainers / cores now, so I guess it would make sense to perhaps archive (I will follow this through outside this thread), so for now there is no need to tag a queens branch / release. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pkovar at redhat.com Wed Feb 7 16:48:25 2018 From: pkovar at redhat.com (Petr Kovar) Date: Wed, 7 Feb 2018 17:48:25 +0100 Subject: [openstack-dev] [docs] Documentation meeting minutes for 2018-02-07 In-Reply-To: <20180207151143.ee78f1c849f84e925bba04b9@redhat.com> References: <20180207151143.ee78f1c849f84e925bba04b9@redhat.com> Message-ID: <20180207174825.2a7b90b01e8080b8894e0a5a@redhat.com> ======================= #openstack-doc: docteam ======================= Meeting started by pkovar at 16:00:38 UTC. The full logs are available at http://eavesdrop.openstack.org/meetings/docteam/2018/docteam.2018-02-07-16.00.log.html . Meeting summary --------------- * retired internal -core mailing list discussions on new core nominations and started discussing core team changes on -dev (pkovar, 16:07:28) * Add deprecation badges to docs.o.o (pkovar, 16:08:36) * LINK: https://review.openstack.org/#/c/530142/ (pkovar, 16:08:41) * Under review, further changes required in openstackdocstheme (pkovar, 16:08:45) * Rocky PTG (pkovar, 16:12:12) * Planning etherpad for docs+i18n available (pkovar, 16:12:17) * LINK: https://etherpad.openstack.org/p/docs-i18n-ptg-rocky (pkovar, 16:12:22) * Sign up and tell us your ideas on what to discuss in the docs room (pkovar, 16:12:26) * Vancouver Summit (pkovar, 16:13:45) * Planning to have a shared 10+10 mins project update slot with i18n (pkovar, 16:14:00) * Looking for interested (co-)speakers (pkovar, 16:14:34) * Bug Triage Team (pkovar, 16:17:17) * LINK: https://wiki.openstack.org/wiki/Documentation/SpecialityTeams (pkovar, 16:17:23) * Open discussion (pkovar, 16:18:33) Meeting ended at 16:22:51 UTC. People present (lines said) --------------------------- * pkovar (33) * openstack (3) * jamesmcarthur (1) Generated by `MeetBot`_ 0.1.4 From prometheanfire at gentoo.org Wed Feb 7 16:49:24 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Wed, 7 Feb 2018 10:49:24 -0600 Subject: [openstack-dev] [heat][horizon][ironic][neutron][swift][tricircle][vitrage][watcher] Help needed for your release In-Reply-To: References: <20180207162357.6vbsj5ty76hvhxiw@gentoo.org> Message-ID: <20180207164924.rgtmyqz5yudy5xmp@gentoo.org> On 18-02-07 16:33:52, Luke Hinds wrote: > On Wed, Feb 7, 2018 at 4:23 PM, Matthew Thode > wrote: > > > Hi all, > > > > it looks like some of your projects may need to cut a queens > > branch/release. Is there anything we can do to move it along? > > > > The following is the list I'm working off of (will be updated as > > projects release) > > https://gist.github.com/prometheanfire/9449355352d97207aa85172cd9ef4b9f > > > > As of right now it's as follows. > > > From what I know anchor (security) has no maintainers / cores now, so I > guess it would make sense to perhaps archive (I will follow this through > outside this thread), so for now there is no need to tag a queens branch / > release. Ya, a bunch of those are maintainerless, the ones of primary concern are those managed by ironic, swift, tricircle, vitrage, watcher, heat and neutron -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From ed at leafe.com Wed Feb 7 17:08:49 2018 From: ed at leafe.com (Ed Leafe) Date: Wed, 7 Feb 2018 11:08:49 -0600 Subject: [openstack-dev] [api-wg] [api] [cinder] [nova] Support specify action name in request url In-Reply-To: References: Message-ID: <3DD31BF3-055F-4BEB-9B01-AB92DBA0C6FD@leafe.com> On Feb 2, 2018, at 2:11 AM, Duncan Thomas wrote: > > So I guess my question here is why is being RESTful good? Sure it's (very, very loosely) a standard, but what are the actual advantages? Standards come and go, what we want most of all is a good quality, easy to use API. REST is HTTP. I don’t think that that is a “loose” standard by any measure. > I'm not saying that going RESTful is wrong, but I don't see much discussion about what the advantages are, only about how close we are to implementing it. Here’s a quick summary of the advantages, courtesy of SO: https://stackoverflow.com/questions/5320003/why-we-should-use-rest REST is the standard for OpenStack APIs. Our job in the API-SIG is to help all projects develop their APIs to be as consistent as possible without using a top-down, heavy-handed approach. That’s why we included a suggestion for how to make your RPC-ish API consistent with other projects that also use an RPC-like approach to parts of their API. -- Ed Leafe From pratapagoutham at gmail.com Wed Feb 7 17:09:49 2018 From: pratapagoutham at gmail.com (Goutham Pratapa) Date: Wed, 7 Feb 2018 22:39:49 +0530 Subject: [openstack-dev] [all][Kingbird]Multi-Region Orchestrator In-Reply-To: <29be24fb-80c4-621b-698e-e2b45f5fcb74@gmail.com> References: <7c7191c1-6bb4-66e9-fbdf-699a9841a2bb@gmail.com> <29be24fb-80c4-621b-698e-e2b45f5fcb74@gmail.com> Message-ID: On Tue, Feb 6, 2018 at 6:03 AM, Jay Pipes wrote: > Goutham, comments inline... > > Also, FYI, using HTML email with different color fonts to indicate > different people talking is not particularly mailing list-friendly. For > reasons why, just check out your last post: > > http://lists.openstack.org/pipermail/openstack-dev/2018-Janu > ary/126842.html > > You can't tell who is saying what in the mailing list post... > > Much better to use non-HTML email and demarcate responses with the > traditional > marker. :) > > OK, comments inline below. > > On 01/31/2018 01:17 PM, Goutham Pratapa wrote: > >> Hi Jay, >> >> Thanks for the questions.. :) >> >> What precisely do you mean by "resources" above ?? >> >> Resources as-in resources required to boot-up a vm (Keypair, Image, >> Flavors ) >> > > Gotcha. Thanks for the answer. > > Also, by "syncing", do you mean "replicating"? The reason I ask is because >> in the case of, say, VM "resources", you can't "sync" a VM across regions. >> You can replicate its bootable image, but you can't "sync" a VM's state >> across multiple OpenStack deployments. >> >> Yes as you said syncing as-in replicating only. >> > > Gotcha. You could, of course, actually use synchronous (or semi-sync) > replication for various databases, including Glance and Keystone's > identity/assignment information, but yes, async replication is just as good. > > and yes we cannot sync vm's across regions but our idea is to >> sync/replicate all the parameters required to boot a vm >> > > OK, sounds good. > > (viz. *image, keypair, flavor*) which are originally there in the source >> region to the target regions in a single-go. >> > > Gotcha. > > Some questions on scope that piqued my interest while reading your > response... 
> > Is Kingbird predominantly designed to be the multi-region orchestrator for > OpenStack deployments that are all owned/operated by the same deployer? Or > does Kingbird have intentions of providing glue services between multiple > fully-independent OpenStack deployments (possibly operated by different > deployers)? > > Further, does Kingbird intend to get into the multi-cloud (as in AWS, > OpenStack, Azure, etc) orchestration game? >> >> > For now Kingbird is designed for openstack deployments that are all >> owned by the same deployer and yes we would like to get into multi-cloud >> orchestration dont know how ?? But the idea is there. (If you can please >> guide us then may be we can acheive this :) ) > > > We have to see how far we can adhere between different >> multiple-openstack deployments > > I'm curious what you mean by "resource management". Could you elaborate a >> bit on this? >> >> Resource management as-in managing the resources i.e say a user has a >> glance image(*qcow2 or ami format*) or >> say flavor(*works only if admin*) with some properties or keypair present >> in one source regionand he wants the same image or >> same flavor with same properties or the same keypair in another set of >> regions user may have to recreate them in all target regions. >> >> But with the help of kingbird you can do all the operations in a single >> go. >> >> --> If user wants to sync a resource of type keypair he can replicate the >> keypair into multiple target regions in single go (similarly glance images >> and flavors ) >> --> If user wants different type of resource( keypair,image and flavor) >> in a single go then user can give a yaml file as input and kingbird >> replicates all resources in all target regions >> > > OK, I understand your use case here, thanks. > > It does seem to me, however, that if the intention is *not* to get into > the multi-cloud orchestration game, that a simpler solution to this > multi-region OpenStack deployment use case would be to simply have a global > Glance and Keystone infrastructure that can seamlessly scale to multiple > regions. > Frankly we never tried this. we will have to try this. > > That way, there'd be no need for replicating anything. > > I suppose what I'm recommending it that instead of the concept of a region > (or availability zone in Nova for that matter) being a mostly-configuration > option thing, that the OpenStack contributor community actually work to > make regions (the concept that Keystone labels a region; which is just a > grouping of service endpoints) the one and only concept of a user-facing > "partition" throughout OpenStack. > > That way we would have OpenStack services like Glance, Nova, Cinder, > Neutron, etc just *natively* understand which region they are in and how/if > they can communicate with other regions. > > Sometimes it seems we (as a community) go through lots of hoops working > around fundamental architectural problems in OpenStack instead of just > fixing those problems to begin with. See: Nova cellsv1 (and some of > cellsv2), Keystone federation, the lack of a real availability zone concept > anywhere, Nova shelve/unshelve (partly developed because VMs and IPs were > too closely coupled at the time), the list goes on and on... > > Anyway, mostly just rambling/ranting... just food for thought. > > Yes :) thanks for your suggestions and ideas.. this is good way forward > for our team. > > Best, > -jay > > Thanks >> Goutham. 
>> >> >> On Wed, Jan 31, 2018 at 9:25 PM, Jay Pipes > jaypipes at gmail.com>> wrote: >> >> On 01/31/2018 01:49 AM, Goutham Pratapa wrote: >> >> *Kingbird (The Multi Region orchestrator):* >> >> We are proud to announce kingbird is not only a centralized >> quota and resource-manager but also a Multi-region Orchestrator. >> >> *Use-cases covered: >> >> *- Admin can synchronize and periodically balance quotas across >> regions and can have a global view of quotas of all the tenants >> across regions. >> - A user can sync a resource or a group of resources from one >> region to other in a single go >> >> >> What precisely do you mean by "resources" above? >> >> Also, by "syncing", do you mean "replicating"? The reason I ask is >> because in the case of, say, VM "resources", you can't "sync" a VM >> across regions. You can replicate its bootable image, but you can't >> "sync" a VM's state across multiple OpenStack deployments. >> >> A user can sync multiple key-pairs, images, and flavors from >> one region to other, ( Flavor can be synced only by admin) >> >> - A user must have complete tempest test-coverage for all the >> scenarios/services rendered by kingbird. >> >> - Horizon plugin so that user can access/view global limits. >> >> * Our Road-map:* >> >> -- Automation scripts for kingbird in >> -ansible, >> -salt >> -puppet. >> -- Add SSL support to kingbird >> -- Resource management in Kingbird-dashboard. >> >> >> I'm curious what you mean by "resource management". Could you >> elaborate a bit on this? >> >> Thanks, >> -jay >> >> -- Kingbird in a docker >> -- Add Kingbird into Kolla. >> >> We are looking out for*_contributors and ideas_* which can >> enhance Kingbird and make kingbird a one-stop solution for all >> multi-region problems >> >> >> >> *_Stable Branches :_ >> * >> * >> Kingbird-server: >> https://github.com/openstack/kingbird/tree/stable/queens >> >> > > >> * >> *Python-Kingbird-client (0.2.1): >> https://github.com/openstack/python-kingbirdclient/tree/0.2.1 >> >> > > >> * >> >> I would like to Thank all the people who have helped us in >> achieving this milestone and guided us all throughout this >> Journey :) >> >> Thanks >> Goutham Pratapa >> PTL >> OpenStack-Kingbird. >> >> >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > subscribe> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> >> >> -- >> Cheers !!! >> Goutham Pratapa >> > -- Cheers !!! Goutham Pratapa -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Wed Feb 7 17:11:33 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Wed, 7 Feb 2018 18:11:33 +0100 Subject: [openstack-dev] [OpenStackClient][Security][ec2-api][heat][horizon][ironic][kuryr][magnum][manila][masakari][neutron][senlin][shade][solum][swift][tacker][tricircle][vitrage][watcher][winstackers] Help needed for your release In-Reply-To: <20180207162357.6vbsj5ty76hvhxiw@gentoo.org> References: <20180207162357.6vbsj5ty76hvhxiw@gentoo.org> Message-ID: <271f73e0-8dc6-d680-711f-6cf4f1911254@redhat.com> Hi, On 02/07/2018 05:23 PM, Matthew Thode wrote: > Hi all, > > it looks like some of your projects may need to cut a queens > branch/release. Is there anything we can do to move it along? Review patches? Make the gate work faster? 
:) The Ironic team is working on it, we expect stable/queens requests to come later today or early tomorrow. Two more comments inline. > > The following is the list I'm working off of (will be updated as > projects release) > https://gist.github.com/prometheanfire/9449355352d97207aa85172cd9ef4b9f > > As of right now it's as follows. > > # Projects without team or release model could not be found in openstack/releases for queens > openstack/almanach > openstack/compute-hyperv > openstack/ekko > openstack/gce-api > openstack/glare > openstack/ironic-staging-drivers I don't think non-official projects get tracked via openstack/releases. > openstack/kosmos > openstack/mixmatch > openstack/mogan > openstack/nemesis > openstack/networking-dpm > openstack/networking-l2gw > openstack/networking-powervm > openstack/nova-dpm > openstack/nova-lxd > openstack/nova-powervm > openstack/os-xenapi > openstack/python-cratonclient > openstack/python-glareclient > openstack/python-kingbirdclient > openstack/python-moganclient > openstack/python-oneviewclient > openstack/python-valenceclient > openstack/swauth > openstack/tap-as-a-service > openstack/trio2o > openstack/valence > openstack/vmware-nsx > openstack/vmware-nsxlib > > # Projects missing a release/branch for queens > openstackclient OpenStackClient > anchor Security > ec2-api ec2-api > django_openstack_auth horizon > horizon-cisco-ui horizon > bifrost ironic > ironic-python-agent-builder ironic This one is empty and will not be released for Queens. > magnum magnum > magnum-ui magnum > manila-image-elements manila > masakari masakari > masakari-monitors masakari > python-masakariclient masakari > os-service-types shade > tacker tacker # I think this one is released > tacker-horizon tacker # but not this one > > # Repos with type: horizon-plugin (typically release a little later) > manila-ui manila > neutron-vpnaas-dashboard neutron > senlin-dashboard senlin > solum-dashboard solum > watcher-dashboard watcher > > # Repos with type: other > heat-agents heat > ironic-python-agent ironic > kuryr-kubernetes kuryr > neutron-vpnaas neutron > networking-hyperv winstackers > > # Repos with type: service > ironic ironic > swift swift > tricircle tricircle > vitrage vitrage > watcher watcher > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From ifat.afek at nokia.com Wed Feb 7 17:15:39 2018 From: ifat.afek at nokia.com (Afek, Ifat (Nokia - IL/Kfar Sava)) Date: Wed, 7 Feb 2018 17:15:39 +0000 Subject: [openstack-dev] [vitrage] Vitrage alarm processing behavior In-Reply-To: References: <2E8BC35D-3FC3-40C1-85F2-09E4C3D4BB2E@nokia.com> Message-ID: Hi Paul, I’m glad that my fix helped. Regarding the Doctor datasource: the purpose of this datasource was to be used by the Doctor test scripts. Do you intend to modify it, or to create a new similar datasource that also supports polling? Modifying the existing datasource could be problematic, since we need to make sure the existing functionality and tests stay the same. In general, most of our datasources support both polling and notifications. A simple example is the Cinder datasource [1]. For example of an alarm datasource, you can look at Zabbix datasource [2]. You can also go over the documentation of how to add a new datasource [3]. 
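For example, if you go with the event-driven (notification) path, the flow could look roughly like this. Please treat it as a sketch only: I'm writing the "vitrage event post" CLI call and the "hostname"/"source" fields from memory, so double-check them against your client version; only the "status" field and the compute.host.down type come from this thread:

  # raise the alarm through the Doctor datasource
  vitrage event post --type 'compute.host.down' \
      --details '{"hostname": "compute-1", "source": "sample_monitor", "status": "down"}'

  # later, clear the same alarm by sending the matching "up" event
  vitrage event post --type 'compute.host.down' \
      --details '{"hostname": "compute-1", "source": "sample_monitor", "status": "up"}'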
As for your question, it is the responsibility of the datasource to clear the alarms that it created. For the Doctor datasource, you can send an event with “status”:”up” in the details and the datasource will clear the alarm. [1] https://github.com/openstack/vitrage/tree/master/vitrage/datasources/cinder/volume [2] https://github.com/openstack/vitrage/tree/master/vitrage/datasources/zabbix [3] https://docs.openstack.org/vitrage/latest/contributor/add-new-datasource.html Best Regards, Ifat. From: Paul Vaduva Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Wednesday, 7 February 2018 at 15:50 To: "OpenStack Development Mailing List (not for usage questions)" Cc: Ciprian Barbu Subject: Re: [openstack-dev] [vitrage] Vitrage alarm processing behavior Hi Ifat, Yes I’ve checked the 1.3.1 refers to a deb package (python-vitrage) version built by us, so the git tag used to build that deb is 1.3.0. But I also backported doctor datasource from vitreage git master branch. I also noticed that when I configure snapshots_interval=10 I also get this exception in /var/log/vitrage/graph.log around the time the alarms disapear. https://hastebin.com/ukisajojef.sql I've cherry picked your before mentioned change and the alarm that came from event is now persistent and the exception is gone. So it was a bug. I understand that for doctor datasources I need to have events for raising the alarm and also for clearing it is that correct? Best Regards, Paul From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com] Sent: Wednesday, February 7, 2018 1:24 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [vitrage] Vitrage alarm processing behavior Hi Paul, It sounds like a bug. Alarms created by a datasource are not supposed to be deleted later on. It might be a bug that was fixed in Queens [1]. I’m not sure which Vitrage version you are actually using. I failed to find a vitrage version 1.3.1. Could it be that you are referring to a version of python-vitrageclient or vitrage-dashboard? In any case, if you are using an older version, I suggest that you try to use the fix that I mentioned [1] and see if it helps. [1] https://review.openstack.org/#/c/524228 Best Regards, Ifat. From: Paul Vaduva > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Wednesday, 7 February 2018 at 11:58 To: "openstack-dev at lists.openstack.org" > Subject: [openstack-dev] [vitrage] Vitrage alarm processing behavior Hi Vitrage developers, I have a question about vitrage innerworkings, I ported doctor datasource from master branch to an earlier version of vitrage (1.3.1). I noticed some behavior I am wondering if it's ok or it is bug of some sort. Here it is: 1. I am sending some event for rasing an alarm to doctor datasource of vitrage. 2. I am receiving the event hence the alarm is displayed on vitrage dashboard attached to the affected resource (as expected) 3. 
If I have configured snapshot_interval=10 in /etc/vitrage/vitrage.conf The alarm disapears after a while fragment from /etc/vitrage/vitrage.conf *************** [datasources] types = nova.host,nova.instance,nova.zone,cinder.volume,neutron.network,neutron.port,doctor snapshots_interval=10 *************** On the other hand if I comment it out the alarm persists ************** [datasources] types = nova.host,nova.instance,nova.zone,cinder.volume,neutron.network,neutron.port,doctor #snapshots_interval=10 ************** I am interested if this behavior is correct or is this a bug. My intention is to create some sort of hybrid datasource starting from the doctor one, that receives events for raising alarms like compute.host.down but uses polling to clear them. Best Regards, Paul Vaduva -------------- next part -------------- An HTML attachment was scrubbed... URL: From witold.bedyk at est.fujitsu.com Wed Feb 7 17:16:37 2018 From: witold.bedyk at est.fujitsu.com (Bedyk, Witold) Date: Wed, 7 Feb 2018 17:16:37 +0000 Subject: [openstack-dev] [release][ptl] Missing and old intermediary projects In-Reply-To: <20180202223433.GA20855@sm-xps> References: <20180202223433.GA20855@sm-xps> Message-ID: Hi Sean, thanks for the reminder. The changes for Monasca release are under way [1, 2]. Best greetings Witek [1] https://review.openstack.org/541767 [2] https://review.openstack.org/541776 > -----Original Message----- > From: Sean McGinnis [mailto:sean.mcginnis at gmx.com] > Sent: Freitag, 2. Februar 2018 23:35 > To: openstack-dev at lists.openstack.org > Subject: [openstack-dev] [release][ptl] Missing and old intermediary projects > > Hey all, > > Sending this kind of late on a Friday, but I will also include this information in > the weekly countdown email. Just hoping to increase the chances of it > getting seen. > > One of our release models is cycle-with-intermediary. With this type of > project, the projects are able to do full releases at any time, with the > commitment "to produce a release near the end of the 6-month > development cycle to be used with projects using the other cycle-based > release models". > > Ideally, this means these projects will have one or more releases during the > development cycle, and will have a final release leading up to the RC1 > deadline. This "final" release is then used to cut a stable/queens branch for > the project. > > Well, the RC1 milestone is coming up next Thursday, and we have a few > projects following this release model that have not done any release yet for > Queens. > There are other projects that have done a Queens release, but it has been > awhile since those were done, so we're not really sure if they are intended > to be the last official release for Queens. > > For those without a release - if nothing is done in time - the release team will > need to force a release off of HEAD to be able to create the stable/queens > branch. > > For those with old Queens releases - unless we hear otherwise, we will need > to use the point of that last release to cut stable/queens for those repos. > > The release team would rather not be the ones to decide when projects are > released, nor be the ones to decide what becomes stable/queens for these > projects. Please make every effort to release and/or branch these projects > before next Thurday's deadline. 
> > The projects with existing but old Queens releases are: > > - swift > - storlets > - monasca / monasca-log-api > > The projects that have not yet done a Queens intermediary release are: > > - aodh, ceilometer, panko > - heat-translator > - ironic-ui > - monasca-kibana-plugin, monasca-thresh > - murano-agent > - patrole > - tacker-horizon > - tripleo-quickstart > - zun, zun-ui > > For some of these, it might make sense to switch to a different release > model. > Some of the more mature ones may be better as "independent". > > If you have any questions or problems that the release team can help with, > please come see us in the #openstack-release channel. > > Thanks, > > Sean (smcginnis) > > __________________________________________________________ > ________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev- > request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From ifat.afek at nokia.com Wed Feb 7 17:20:01 2018 From: ifat.afek at nokia.com (Afek, Ifat (Nokia - IL/Kfar Sava)) Date: Wed, 7 Feb 2018 17:20:01 +0000 Subject: [openstack-dev] [heat][horizon][ironic][neutron][swift][tricircle][vitrage][watcher] Help needed for your release In-Reply-To: <20180207164924.rgtmyqz5yudy5xmp@gentoo.org> References: <20180207162357.6vbsj5ty76hvhxiw@gentoo.org> <20180207164924.rgtmyqz5yudy5xmp@gentoo.org> Message-ID: Hi, I will request to a create stable/queens branch for Vitrage later today. Thanks, Ifat. On 07/02/2018, 18:49, "Matthew Thode" wrote: On 18-02-07 16:33:52, Luke Hinds wrote: > On Wed, Feb 7, 2018 at 4:23 PM, Matthew Thode > wrote: > > > Hi all, > > > > it looks like some of your projects may need to cut a queens > > branch/release. Is there anything we can do to move it along? > > > > The following is the list I'm working off of (will be updated as > > projects release) > > https://gist.github.com/prometheanfire/9449355352d97207aa85172cd9ef4b9f > > > > As of right now it's as follows. > > > From what I know anchor (security) has no maintainers / cores now, so I > guess it would make sense to perhaps archive (I will follow this through > outside this thread), so for now there is no need to tag a queens branch / > release. Ya, a bunch of those are maintainerless, the ones of primary concern are those managed by ironic, swift, tricircle, vitrage, watcher, heat and neutron -- Matthew Thode (prometheanfire) From jpena at redhat.com Wed Feb 7 17:20:52 2018 From: jpena at redhat.com (Javier Pena) Date: Wed, 7 Feb 2018 12:20:52 -0500 (EST) Subject: [openstack-dev] [packaging-rpm] PTL candidacy In-Reply-To: <1206385060.1050249.1518024037285.JavaMail.zimbra@redhat.com> Message-ID: <524690626.1050328.1518024052023.JavaMail.zimbra@redhat.com> Hello fellow packagers! I would like to announce my candidacy to be the PTL for the Packaging Rpm project during the Rocky development cycle. During the last cycles, the project has become a great collaboration space for people working on RPM packages for OpenStack products. We have a number of great tools that help us in different areas, even beyond the boundaries of RPM-based distributions [1], and a wide, up-to-date package set. For the Rocky cycle, my focus would be on: * Fixing 3rd party CI annoyances: we are often getting hit by known issues in the 3rd party CI systems. Let's try to work on those known issues and reduce their occurrence to a minimum, so we can spend more time on reviews and less time troubleshooting our CI. 
* Work on getting the packages created by the project tested by some installer project. This would greatly help us in improving quality on those small details that only get caught during real life tests. * Expanding OpenStack distribution usage of the artifacts generated by the Packaging Rpm project. That, of course, would be in addition to our common goals: expanding our contributor base, improving collaboration between RPM distributions, and keeping a high quality set of packages. Thanks for reading, Javier [1] - https://github.com/openstack/pymod2pkg/blob/master/pymod2pkg/__init__.py#L294-L316 From pratapagoutham at gmail.com Wed Feb 7 17:24:40 2018 From: pratapagoutham at gmail.com (Goutham Pratapa) Date: Wed, 7 Feb 2018 22:54:40 +0530 Subject: [openstack-dev] [all][Kingbird]Multi-Region Orchestrator In-Reply-To: <2500e357-23a3-2d53-0b5c-591dbd0d4cbb@redhat.com> References: <2500e357-23a3-2d53-0b5c-591dbd0d4cbb@redhat.com> Message-ID: Hi Zane, Thanks for writing to us. We have a doubt, which I have mentioned in the inline comments; could you please help us in that regard? On Tue, Feb 6, 2018 at 11:53 PM, Zane Bitter wrote: > On 31/01/18 01:49, Goutham Pratapa wrote: > >> *Kingbird (The Multi Region orchestrator):* >> >> We are proud to announce kingbird is not only a centralized quota and >> resource-manager but also a Multi-region Orchestrator. >> > > I'd invite you to consider coming up with a different short description > for the project, because this one reads ambiguously. It can be interpreted > as either an orchestrator that works across multiple regions, or a tool > that 'orchestrates' multiple regions for some new definition of > 'orchestration' (and I regret that we already have more than one). I gather > you mean the latter; the former already exists in OpenStack. Yes, as you said, it can be interpreted as a tool that can orchestrate multiple regions. Just to be sure: does OpenStack already have a project which can replicate resources and orchestrate them? I ask because in the coming cycle our idea is that a user just gives a VM ID or VM name and we sync all the resources with which the VM was actually created. Of course we can't have the same network in the target region, so we may need the network ID or port ID in the target region from the user, so that Kingbird can boot the requested VM in the target region(s). > > cheers, > Zane. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Cheers !!! Goutham Pratapa -------------- next part -------------- An HTML attachment was scrubbed... URL: From daniel.mellado.es at ieee.org Wed Feb 7 17:30:18 2018 From: daniel.mellado.es at ieee.org (Daniel Mellado) Date: Wed, 7 Feb 2018 18:30:18 +0100 Subject: [openstack-dev] [Kuryr] Rocky Kuryr PTL candidacy Message-ID: <0fa4c636-e567-a210-6ef8-c0e196342a64@ieee.org> Dear all, I'd like to announce my candidacy for PTL of the Kuryr project for the Rocky cycle. After being part of the Kuryr community and running the upstream meetings for a while, I'd be delighted to continue Toni's great work as PTL for the next six months, as he won't be running for it this cycle. I do intend to keep acting as an interface for the cross-project sessions and try to use this cycle to grow on requirements and stability.
Within the Rocky cycle, there are a few topics I'd like to focus on: - Further enhance our testing coverage and SDN matrix. - Kubernetes network policies. - SRIOV support in Kuryr-Kubernetes. - Multi pool driver. - Further improve debugging and introspection tools using Kubernetes plugins. Also, I'll try to support visibility within the community and be a mediator for growing the project scope. Thanks a lot! Daniel Mellado (dmellado) https://review.openstack.org/#/c/540542/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From pratapagoutham at gmail.com Wed Feb 7 17:34:46 2018 From: pratapagoutham at gmail.com (Goutham Pratapa) Date: Wed, 7 Feb 2018 23:04:46 +0530 Subject: [openstack-dev] [all][Kingbird]Multi-Region Orchestrator In-Reply-To: <5A7B2732.8040101@windriver.com> References: <7c7191c1-6bb4-66e9-fbdf-699a9841a2bb@gmail.com> <29be24fb-80c4-621b-698e-e2b45f5fcb74@gmail.com> <5A7B2732.8040101@windriver.com> Message-ID: Hi Chris, Thanks for writing to us. Our idea is just the same, and we are working on how to do it :) Thanks for the use-case :) On Wed, Feb 7, 2018 at 9:50 PM, Chris Friesen wrote: > On 02/05/2018 06:33 PM, Jay Pipes wrote: > > It does seem to me, however, that if the intention is *not* to get into the >> multi-cloud orchestration game, that a simpler solution to this >> multi-region >> OpenStack deployment use case would be to simply have a global Glance and >> Keystone infrastructure that can seamlessly scale to multiple regions. >> >> That way, there'd be no need for replicating anything. >> > > One use-case I've seen for this sort of thing is someone that has multiple > geographically-separate clouds, and maybe they want to run the same heat > stack in all of them. > > So they can use global glance/keystone, but they need to ensure that they > have the right flavor(s) available in all the clouds. This needs to be > done by the admin user, so it can't be done as part of the normal user's > heat stack. > Something like "create a keypair in each of the clouds with the same > public key and same name" could be done by the end user with some coding, > but it's convenient to have a tool to do it for you. > > Chris > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Cheers !!! Goutham Pratapa -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnsomor at gmail.com Wed Feb 7 17:40:10 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Wed, 7 Feb 2018 09:40:10 -0800 Subject: [openstack-dev] [ptg] Track around release cycle, consumption models and stable branches In-Reply-To: <02df94bd-1a51-e13c-1255-ba6631303937@openstack.org> References: <02df94bd-1a51-e13c-1255-ba6631303937@openstack.org> Message-ID: I am interested in contributing to this discussion. Michael On Wed, Feb 7, 2018 at 3:42 AM, Thierry Carrez wrote: > Hi everyone, > > I was wondering if anyone would be interested in brainstorming the > question of how to better align our release cycle and stable branch > maintenance with the OpenStack downstream consumption models.
That > includes discussing the place of the distributions, the need for LTS, > and where does the open source upstream project stop. > > I have hesitated to propose it earlier, as it sounds like a topic that > should be discussed with the wider community at the Forum. And it will, > but it feels like this needs a deeper pre-discussion in a productive > setting, and tonyb and eumel8 have been proposing that topic on the > missing topics etherpad[1], so we might as well take some time at the > PTG to cover that. > > Would anyone be interested in such a discussion ? It would be scheduled > on the Tuesday. How much time would we need ? I was thinking we could > use only Tuesday afternoon. > > [1] https://etherpad.openstack.org/p/PTG-Dublin-missing-topics > > -- > Thierry Carrez (ttx) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From rbowen at redhat.com Wed Feb 7 17:46:55 2018 From: rbowen at redhat.com (Rich Bowen) Date: Wed, 7 Feb 2018 12:46:55 -0500 Subject: [openstack-dev] [all] [ptg] Reminder: Sign up for project interviews at PTG Message-ID: A reminder - the PTG is now less than 3 weeks out. As you plan your schedule, please set aside time for a project/team interview, and sign up at https://docs.google.com/spreadsheets/d/1MK7rCgYXCQZP1AgQ0RUiuc-cEXIzW5RuRzz5BWhV4nQ/edit#gid=0 That document also contains a description of what kind you will want to prepare for your interview, and some examples of past interviews, which you can see at http://youtube.com/RDOCommunity Thanks! -- Rich Bowen - rbowen at redhat.com @RDOcommunity // @CentOSProject // @rbowen From tpb at dyncloud.net Wed Feb 7 17:49:15 2018 From: tpb at dyncloud.net (Tom Barron) Date: Wed, 7 Feb 2018 12:49:15 -0500 Subject: [openstack-dev] [manila] [ptl] announcing PTL candidacy for manila Message-ID: <20180207174915.bxd3ouukvsoyvv75@barron.net> Friends, Stackers, Community, I write to announce my candidacy for the Manila PTL position for the Rocky cycle. I've worked in in OpenStack since Juno and actively in Manila since Mitaka or so. I've had more than one employer in that time and think it's fair to say that I have a reputation for working upstream in the interests of the community. I am one of the more active Manila core reviewers, care about welcoming and engaging new contributors, encouraging participation, and at the same time preserving code quality and the integrity of the project. Ben Swartzlander is moving on to do other cool stuff, including work as a Manila contributor. I expect that I share a rather general perception that no one can fill his shoes as PTL. That said, I do think that if we work together to make Manila shine we can make it truly awesome! Some areas I'd like us to work on in the near future include: * python 3 support. Upstream python 2 support is going away in 2020 if I understand correctly and between now and then distros are likely to drop support for it. We need to do our part to get manila working with python 3 in devstack, and also with python 3 when deployed at scale via frameworks like kolla, charms, and TripleO. * performance and scale. We learned recently that Huawei public cloud runs manila with thousands of shares and that CERN is planning to move from 83 shares to over 2000 shares. 
Let's get more success stories with more back ends, build a common understanding of any bottlenecks, and work plans to address these. * side-by-side deployment with kubernetes and other clouds. Whether running kubernetes on OpenStack, deploying OpenStack services with kubernetes, or building standalone software defined storage with manila and cinder without other OpenStack services, this is a space where we need to explore and be actively engage. * production quality open source software defined back ends. Manila has great proprietary storage back ends, but shouldn't we have open source back ends that work reliably at scale as well? We could make the generic driver great in this regard, or build out distributed file system back ends like cephfs with good data path HA and tenant separation. There are perhaps other alternatives that haven't surfaced yet. There's a lot of room here for innovation and certainly demand from cloud operators on this front. * vendor participation: we have a mix of vendors introducing new back ends, sustained participation from vendors with existing back ends, and some back ends that no longer have attention from their vendors even though -- working with a distro -- I see customers indicating that they *want* to use those back ends if only the vendors were engaged! Let's welcome new vendors with open arms and help all understand the mutual benefit of remaining involved with manila as the community evolves and grows. Those are some of my ideas. I offer them as much as anything to stimulate others working on manila to come to PTG and the Rocky cycle with their own initiatives. Also, if you haven't been working in manila and any of the above seems interesting (or just nuts) come on over! Manila is a great place to contribute and innovate! Thanks for listening. -- Tom Barron (tbarron -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From prometheanfire at gentoo.org Wed Feb 7 18:57:18 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Wed, 7 Feb 2018 12:57:18 -0600 Subject: [openstack-dev] [ptg] Track around release cycle, consumption models and stable branches In-Reply-To: <02df94bd-1a51-e13c-1255-ba6631303937@openstack.org> References: <02df94bd-1a51-e13c-1255-ba6631303937@openstack.org> Message-ID: <20180207185718.3obcwnan5pnxp7i3@gentoo.org> On 18-02-07 12:42:05, Thierry Carrez wrote: > Hi everyone, > > I was wondering if anyone would be interested in brainstorming the > question of how to better align our release cycle and stable branch > maintenance with the OpenStack downstream consumption models. That > includes discussing the place of the distributions, the need for LTS, > and where does the open source upstream project stop. > > I have hesitated to propose it earlier, as it sounds like a topic that > should be discussed with the wider community at the Forum. And it will, > but it feels like this needs a deeper pre-discussion in a productive > setting, and tonyb and eumel8 have been proposing that topic on the > missing topics etherpad[1], so we might as well take some time at the > PTG to cover that. > > Would anyone be interested in such a discussion ? It would be scheduled > on the Tuesday. How much time would we need ? I was thinking we could > use only Tuesday afternoon. > > [1] https://etherpad.openstack.org/p/PTG-Dublin-missing-topics > I'll make myself available with both my requirements and distro packaging hats. 
-- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From MM9745 at att.com Wed Feb 7 19:31:15 2018 From: MM9745 at att.com (MCEUEN, MATT) Date: Wed, 7 Feb 2018 19:31:15 +0000 Subject: [openstack-dev] [openstack-helm] OpenStack-Helm Office Hours Message-ID: <7C64A75C21BB8D43BD75BB18635E4D89654E130F@MOSTLS1MSGUSRFF.ITServices.sbc.com> Team, The OpenStack-Helm team will begin holding weekly Office Hours in IRC, with a goal of knowledge sharing and Q&A between the more experienced and the newer team members. The project cores have committed to giving attention to at least one office hour during the week, and it should be a great way to ramp up on the project. All are welcome to attend! We're starting with three different hours per week, and we're keeping them up to date in our wiki [1]. Hope to see some new faces there. [1] https://wiki.openstack.org/wiki/Openstack-helm Thanks, Matt McEuen From andr.kurilin at gmail.com Wed Feb 7 19:39:02 2018 From: andr.kurilin at gmail.com (Andrey Kurilin) Date: Wed, 7 Feb 2018 21:39:02 +0200 Subject: [openstack-dev] [heat][rally] What should we do with legacy-rally-dsvm-fakevirt-heat In-Reply-To: References: Message-ID: Hi Rico and stackers, Thanks for raising this topic. Short answer: please leave it as is for now. Rally team will work on ZuulV3 jobs soon. Detailed: We are planning to make some big changes in our architecture which includes splitting the main repo into a separate repository for a framework and a separate repository for all OpenStack plugins. To minimize work across all projects which have Rally job, we decided to pause working on ZuulV3 until the split will be finished. As for estimates, I'm planning to make the final release before splitting today or tomorrow. As soon as new release will be ready, we will start working on splitting and CI as well. Thanks for the patient and for using Rally! 2018-02-07 10:13 GMT+02:00 Rico Lin : > Hi heat and rally team > > Right now, in heat's zuul jobs. We still got one legacy job to change > `legacy-rally-dsvm-fakevirt-heat` [1] which I already put a patch out > here [2], but after discussion with infra team, it seems best if we can > define this in rally, and reference it in heat. > So my question to rally team for all these will be, do we still need this > job? and how you guys think about if we put that into rally? > > [1] https://github.com/openstack-infra/project-config/blob/master/zuul.d/ > projects.yaml#L6979 > [2] https://review.openstack.org/#/c/509141 > -- > May The Force of OpenStack Be With You, > > *Rico Lin*irc: ricolin > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Best regards, Andrey Kurilin. -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris at openstack.org Wed Feb 7 19:47:57 2018 From: chris at openstack.org (Chris Hoge) Date: Wed, 7 Feb 2018 11:47:57 -0800 Subject: [openstack-dev] [k8s] openstack-sig-k8s planning for Dublin Message-ID: <5D63CD65-D547-4733-9B12-01C271E74396@openstack.org> sig-k8s has a block of room time put aside for the Dublin PTG. I’ve set up a planning etherpad for work and discussion topics[1]. 
High priority items include: * openstack provider breakout [2] * provider testing * documentation updates Please feel free to add relevant agenda items, links, and discussion topics. We will begin some pre-planning today at the k8s-sig-openstack meeting, taking place at 00 UTC Thursday [3] (Wednesday afternoon/evening for North America, morning for Asia/Pacific region). Thanks, Chris [1] https://etherpad.openstack.org/p/sig-k8s-2018-dublin-ptg [2] https://github.com/dims/openstack-cloud-controller-manager [3] https://github.com/kubernetes/community/tree/master/sig-openstack#meetings From tony at bakeyournoodle.com Wed Feb 7 20:51:52 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Thu, 8 Feb 2018 07:51:52 +1100 Subject: [openstack-dev] [ptg] Track around release cycle, consumption models and stable branches In-Reply-To: <02df94bd-1a51-e13c-1255-ba6631303937@openstack.org> References: <02df94bd-1a51-e13c-1255-ba6631303937@openstack.org> Message-ID: <20180207205151.GI23143@thor.bakeyournoodle.com> On Wed, Feb 07, 2018 at 12:42:05PM +0100, Thierry Carrez wrote: > Would anyone be interested in such a discussion ? It would be scheduled > on the Tuesday. How much time would we need ? I was thinking we could > use only Tuesday afternoon. +1 Sounds good to me. I'll be there :) Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From e0ne at e0ne.info Wed Feb 7 20:59:05 2018 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Wed, 7 Feb 2018 22:59:05 +0200 Subject: [openstack-dev] [horizon][ptl] Rocky PTL Candidacy Message-ID: Hello Team, I would like to announce my candidacy for PTL of Horizon for Rocky release. I use Horizon since I begin to work with OpenStack in Diablo timeframe. I wasn't active contributor before Pike release, nevertheless, I can see how both Horizon project and community changed over the times. I became Core Reviewer in Queens and worked mostly on bug-fixing and other improvements. Being a PTL is a challenging task especially for such project as Horizon. We should be close both to the other OpenStack components and provide a great user experience for cloud users and operators. As a PTL I will focus on the following areas: * Continue to work on Horizon stabilization and improvements to bring great UX for large-scale deployments. * Finish work on mox to mock migrations in unit tests. * Improve our integrational tests. * Help everybody to contribute to Horizon via reviews, features implementations and bugfixes. I know, that being a PTL is a hard job, but we've got a good team and we'll do our best to make Horizon a bit better during the next cycle. Thank you, Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tony at bakeyournoodle.com Wed Feb 7 21:18:37 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Thu, 8 Feb 2018 08:18:37 +1100 Subject: [openstack-dev] [OpenStackClient][Security][ec2-api][heat][horizon][ironic][kuryr][magnum][manila][masakari][neutron][senlin][shade][solum][swift][tacker][tricircle][vitrage][watcher][winstackers] Help needed for your release In-Reply-To: <271f73e0-8dc6-d680-711f-6cf4f1911254@redhat.com> References: <20180207162357.6vbsj5ty76hvhxiw@gentoo.org> <271f73e0-8dc6-d680-711f-6cf4f1911254@redhat.com> Message-ID: <20180207211822.GJ23143@thor.bakeyournoodle.com> On Wed, Feb 07, 2018 at 06:11:33PM +0100, Dmitry Tantsur wrote: > Hi, > > On 02/07/2018 05:23 PM, Matthew Thode wrote: > > Hi all, > > > > it looks like some of your projects may need to cut a queens > > branch/release. Is there anything we can do to move it along? > > Review patches? Make the gate work faster? :) > > The Ironic team is working on it, we expect stable/queens requests to come > later today or early tomorrow. Two more comments inline. > > > > > The following is the list I'm working off of (will be updated as > > projects release) > > https://gist.github.com/prometheanfire/9449355352d97207aa85172cd9ef4b9f > > > > As of right now it's as follows. > > > > # Projects without team or release model could not be found in openstack/releases for queens > > openstack/almanach > > openstack/compute-hyperv > > openstack/ekko > > openstack/gce-api > > openstack/glare > > openstack/ironic-staging-drivers > > I don't think non-official projects get tracked via openstack/releases. It's true they are not tracked in openstack/realeases but as they receive requirements syncs via the bot we need to track them in requirements. I guess the bottom line is until/if ironic-staging-drivers gets a queens branch you may need to be careful with merging any requirements updates and possibly may request am ACK from the requirements team. Of course if ironic-staging-drivers doesn't have any relationship with a series those statements are probably wrong. > > openstack/kosmos > > openstack/mixmatch > > openstack/mogan > > openstack/nemesis > > openstack/networking-dpm > > openstack/networking-l2gw > > openstack/networking-powervm > > openstack/nova-dpm > > openstack/nova-lxd > > openstack/nova-powervm > > openstack/os-xenapi > > openstack/python-cratonclient > > openstack/python-glareclient > > openstack/python-kingbirdclient > > openstack/python-moganclient > > openstack/python-oneviewclient > > openstack/python-valenceclient > > openstack/swauth > > openstack/tap-as-a-service > > openstack/trio2o > > openstack/valence > > openstack/vmware-nsx > > openstack/vmware-nsxlib > > > > # Projects missing a release/branch for queens > > openstackclient OpenStackClient > > anchor Security > > ec2-api ec2-api > > django_openstack_auth horizon > > horizon-cisco-ui horizon > > bifrost ironic > > ironic-python-agent-builder ironic > > This one is empty and will not be released for Queens. Okay It's safe to ignore then ;P We should probably remove it from projects.txt if it really is empty I'll propose that. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From tony at bakeyournoodle.com Wed Feb 7 21:31:03 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Thu, 8 Feb 2018 08:31:03 +1100 Subject: [openstack-dev] [OpenStackClient][Security][ec2-api][heat][horizon][ironic][kuryr][magnum][manila][masakari][neutron][senlin][shade][solum][swift][tacker][tricircle][vitrage][watcher][winstackers] Help needed for your release In-Reply-To: <20180207211822.GJ23143@thor.bakeyournoodle.com> References: <20180207162357.6vbsj5ty76hvhxiw@gentoo.org> <271f73e0-8dc6-d680-711f-6cf4f1911254@redhat.com> <20180207211822.GJ23143@thor.bakeyournoodle.com> Message-ID: <20180207213102.GK23143@thor.bakeyournoodle.com> On Thu, Feb 08, 2018 at 08:18:37AM +1100, Tony Breeds wrote: > Okay It's safe to ignore then ;P We should probably remove it from > projects.txt if it really is empty I'll propose that. Oh my bad, ironic-python-agent-builder was included as it's included as an ironic project[1] NOT because it;s listed in projects.txt. Given that it's clearly not for me to remove anything. Having said that if the project hasn't had any updates at all since it's creation in July 2017 perhaps it's no longer needed and could be removed? Yours Tony. [1] http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml#n1539 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From e0ne at e0ne.info Wed Feb 7 22:08:40 2018 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Thu, 8 Feb 2018 00:08:40 +0200 Subject: [openstack-dev] [OpenStackClient][Security][ec2-api][heat][horizon][ironic][kuryr][magnum][manila][masakari][neutron][senlin][shade][solum][swift][tacker][tricircle][vitrage][watcher][winstackers] Help needed for your release In-Reply-To: <20180207213102.GK23143@thor.bakeyournoodle.com> References: <20180207162357.6vbsj5ty76hvhxiw@gentoo.org> <271f73e0-8dc6-d680-711f-6cf4f1911254@redhat.com> <20180207211822.GJ23143@thor.bakeyournoodle.com> <20180207213102.GK23143@thor.bakeyournoodle.com> Message-ID: Hi Matt, As discussed earlier today at the Horizon's meeting [2], we're not going to release horizon-cisco-ui and django_openstack_auth because of projects retirement [3] and [4]. [2] http://eavesdrop.openstack.org/meetings/horizon/2018/horizon.2018-02- 07-20.02.html [3] https://review.openstack.org/#/c/541803/ [4] http://lists.openstack.org/pipermail/openstack-dev/2018-January/126428.html Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ On Wed, Feb 7, 2018 at 11:31 PM, Tony Breeds wrote: > On Thu, Feb 08, 2018 at 08:18:37AM +1100, Tony Breeds wrote: > > > Okay It's safe to ignore then ;P We should probably remove it from > > projects.txt if it really is empty I'll propose that. > > Oh my bad, ironic-python-agent-builder was included as it's included as > an ironic project[1] NOT because it;s listed in projects.txt. Given > that it's clearly not for me to remove anything. > > Having said that if the project hasn't had any updates at all since it's > creation in July 2017 perhaps it's no longer needed and could be > removed? > > Yours Tony. 
> > [1] http://git.openstack.org/cgit/openstack/governance/tree/ > reference/projects.yaml#n1539 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From prometheanfire at gentoo.org Wed Feb 7 22:24:31 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Wed, 7 Feb 2018 16:24:31 -0600 Subject: [openstack-dev] [OpenStackClient][Security][ec2-api][heat][horizon][ironic][kuryr][magnum][manila][masakari][neutron][senlin][shade][solum][swift][tacker][tricircle][vitrage][watcher][winstackers] Help needed for your release In-Reply-To: References: <20180207162357.6vbsj5ty76hvhxiw@gentoo.org> <271f73e0-8dc6-d680-711f-6cf4f1911254@redhat.com> <20180207211822.GJ23143@thor.bakeyournoodle.com> <20180207213102.GK23143@thor.bakeyournoodle.com> Message-ID: <20180207222431.hizrqxk6mhhlfcaz@gentoo.org> On 18-02-08 00:08:40, Ivan Kolodyazhny wrote: > Hi Matt, > > As discussed earlier today at the Horizon's meeting [2], we're not going to > release horizon-cisco-ui and django_openstack_auth because of projects > retirement [3] and [4]. > > > [2] http://eavesdrop.openstack.org/meetings/horizon/2018/horizon.2018-02- > 07-20.02.html > [3] https://review.openstack.org/#/c/541803/ > [4] > http://lists.openstack.org/pipermail/openstack-dev/2018-January/126428.html > > Regards, > Ivan Kolodyazhny, > http://blog.e0ne.info/ > > On Wed, Feb 7, 2018 at 11:31 PM, Tony Breeds > wrote: > > > On Thu, Feb 08, 2018 at 08:18:37AM +1100, Tony Breeds wrote: > > > > > Okay It's safe to ignore then ;P We should probably remove it from > > > projects.txt if it really is empty I'll propose that. > > > > Oh my bad, ironic-python-agent-builder was included as it's included as > > an ironic project[1] NOT because it;s listed in projects.txt. Given > > that it's clearly not for me to remove anything. > > > > Having said that if the project hasn't had any updates at all since it's > > creation in July 2017 perhaps it's no longer needed and could be > > removed? > > > > Yours Tony. > > > > [1] http://git.openstack.org/cgit/openstack/governance/tree/ > > reference/projects.yaml#n1539 > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > Yep, already removing horizon-cisco-ui from requirements. (since infra jumped the gun on us :P ) -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From sombrafam at gmail.com Wed Feb 7 23:25:07 2018 From: sombrafam at gmail.com (Erlon Cruz) Date: Wed, 7 Feb 2018 21:25:07 -0200 Subject: [openstack-dev] [glance][cinder]Question about cinder as glance store In-Reply-To: <20180207102704.xq7fch4apuqimqif@localhost> References: <624e7b1f-c503-fdea-9866-687a8cc14c8f@po.ntt-tx.co.jp> <20180207102704.xq7fch4apuqimqif@localhost> Message-ID: That will depend on the Cinder/OS-brick iscsiadm versions right? Can you tell what are the versions from where the problem was fixed? 
Erlon 2018-02-07 8:27 GMT-02:00 Gorka Eguileor : > On 07/02, Rikimaru Honjo wrote: > > Hello, > > > > I'm planning to use cinder as glance store. > > And, I'll setup cinder to connect storage by iSCSI multipath. > > > > In this case, can I run glance-api and cinder-volume on the same node? > > > > In my understanding, glance-api will attach a volume to own node and > > write a uploaded image to the volume if glance backend is cinder. > > I afraid that the race condition of cinder-volume's iSCSI operations > > and glance-api's iSCSI operations. > > Is there possibility of occurring it? > > -- > > _/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/ > > Rikimaru Honjo > > E-mail:honjo.rikimaru at po.ntt-tx.co.jp > > > > > > > > ____________________________________________________________ > ______________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > Hi, > > When properly set with the right configuration and the right system and > OpenStack packages, Cinder, OS-Brick, and Nova no longer have race > conditions with iSCSI operations anymore (single or multipathed), not > even with drivers that do "shared target". > > So I would assume that Glance won't have any issues either as long as > it's properly making the Cinder and OS-Brick calls. > > Cheers, > Gorka. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Thu Feb 8 00:05:32 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 08 Feb 2018 00:05:32 +0000 Subject: [openstack-dev] [All][Elections] End of PTL Nominations Message-ID: Hello Everyone! The PTL Nomination period is now over. The official candidate list is available on the election website[0]. There are 0 projects without candidates, so the TC will not have to appoint any PTL's. There are 3 projects that will have elections: Kolla, QA, & Mistral. The details for those will be posted shortly after we setup the CIVS system. Thank you, - Kendall Nelson (diablo_rojo) [0] http://governance.openstack.org/election/#Rocky-ptl-candidates -------------- next part -------------- An HTML attachment was scrubbed... URL: From borne.mace at oracle.com Thu Feb 8 00:03:18 2018 From: borne.mace at oracle.com (Borne Mace) Date: Wed, 7 Feb 2018 16:03:18 -0800 Subject: [openstack-dev] [ptg] Track around release cycle, consumption models and stable branches In-Reply-To: <02df94bd-1a51-e13c-1255-ba6631303937@openstack.org> References: <02df94bd-1a51-e13c-1255-ba6631303937@openstack.org> Message-ID: <8fd8b7d0-d7d0-c178-2da9-2697d5932a6e@oracle.com> I would be interested in taking part in this discussion as well, -- bm On 02/07/2018 03:42 AM, Thierry Carrez wrote: > Hi everyone, > > I was wondering if anyone would be interested in brainstorming the > question of how to better align our release cycle and stable branch > maintenance with the OpenStack downstream consumption models. That > includes discussing the place of the distributions, the need for LTS, > and where does the open source upstream project stop. 
> > I have hesitated to propose it earlier, as it sounds like a topic that > should be discussed with the wider community at the Forum. And it will, > but it feels like this needs a deeper pre-discussion in a productive > setting, and tonyb and eumel8 have been proposing that topic on the > missing topics etherpad[1], so we might as well take some time at the > PTG to cover that. > > Would anyone be interested in such a discussion ? It would be scheduled > on the Tuesday. How much time would we need ? I was thinking we could > use only Tuesday afternoon. > > [1] https://etherpad.openstack.org/p/PTG-Dublin-missing-topics > From thingee at gmail.com Thu Feb 8 00:25:35 2018 From: thingee at gmail.com (Mike Perez) Date: Thu, 8 Feb 2018 11:25:35 +1100 Subject: [openstack-dev] [ptg] Lightning talks Message-ID: <20180208002535.GA14568@gmail.com> Hey all! I'm looking for six 5-minute lightning talks for the PTG in Dublin. This will be on Friday March 2nd at 13:00-13:30 local time. Appropriate 5 minute talk examples: * Neat features in libraries like oslo that we should consider adopting in our community wide goals. * Features and tricks in your favorite editor that makes doing work easier. * Infra tools that maybe not a lot of people know about yet. Zuul v3 explained in five minutes anyone? * Some potential API specification from the API SIG that we should adopt as a community wide goal. Please email me DIRECTLY the following information: Title: Speaker(s) full name: Abstract: Link to presentation or attachment if you have it already. Laptop on stage will be loaded with your presentation already. I'll have open office available so odp, odg, otp, pdf, limited ppt format support. Submission deadline is February 16 00:00 UTC, and then I'll send confirmation emails to speakers requesting for slides. Thank you, looking forward to hearing some great talks from our community! -- Mike Perez (thingee) -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From kennelson11 at gmail.com Thu Feb 8 00:31:17 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 08 Feb 2018 00:31:17 +0000 Subject: [openstack-dev] [All][Elections][QA][Mistral][Kolla] Polling Begins! Message-ID: Hello! Polls for PTL elections are now open and will remain open for you to cast your vote until Feb 14, 2018 23:45 UTC. We are having elections for Kolla, Mistral & QA. If you are a Foundation individual member and had a commit in one of the program's projects[0] over the Pike-Queens timeframe (22 Feb 2017 to 29 Jan 2018) then you are eligible to vote. You should find your email with a link to the Condorcet page to cast your vote in the inbox of your gerrit preferred email[1]. What to do if you don't see the email and have a commit in at least one of the programs having an election: * check the trash or spam folders of your gerrit Preferred Email address, in case it went into trash or spam * wait a bit and check again, in case your email server is a bit slow * find the sha of at least one commit from the program project repos[0] and email the election officials. If we can confirm that you are entitled to vote, we will add you to the voters list for the appropriate election. Our democratic process is important to the health of OpenStack, please exercise your right to vote! 
Candidate statements/platforms can be found linked to Candidate names on this page: http://governance.openstack.org/election/#Rocky-ptl-candidates Happy voting, -Kendall Nelson (diablo_rojo) [0] The list of the program projects eligible for electoral status: https://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml?id=aug-2017-elections [1] Sign into review.openstack.org: Go to Settings > Contact Information. Look at the email listed as your Preferred Email. That is where the ballot has been sent. -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengh512 at gmail.com Thu Feb 8 00:50:26 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Thu, 8 Feb 2018 08:50:26 +0800 Subject: [openstack-dev] [All][Elections] End of PTL Nominations In-Reply-To: References: Message-ID: Hi Kendall, There is a small typo in the Cyborg PTL name. Rushil helped me submit the patch, but the name of the PTL should be Zhipeng Huang :) If you could correct that on the governance page, that would be less confusing :) On Feb 8, 2018 8:06 AM, "Kendall Nelson" wrote: Hello Everyone! The PTL Nomination period is now over. The official candidate list is available on the election website[0]. There are 0 projects without candidates, so the TC will not have to appoint any PTL's. There are 3 projects that will have elections: Kolla, QA, & Mistral. The details for those will be posted shortly after we setup the CIVS system. Thank you, - Kendall Nelson (diablo_rojo) [0] http://governance.openstack.org/election/#Rocky-ptl-candidates __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From megan at openstack.org Thu Feb 8 01:29:54 2018 From: megan at openstack.org (megan at openstack.org) Date: Wed, 7 Feb 2018 17:29:54 -0800 (PST) Subject: [openstack-dev] RefStack disable anonymous upload and database cleanup Message-ID: <1518053394.020510707@apps.rackspace.com> Hello all! Over the course of the last few months, we have implemented and merged into production the option in the RefStack server to disable the ability to upload RefStack results as an anonymous user. In its current state, the RefStack database includes many anonymous test results that ultimately end up unused. The RefStack team is planning to trim all anonymously uploaded test results in the current database so that it contains only records used in the OpenStack Powered Trademark program. We will then disable the anonymous upload capability and require all test results to be managed with user accounts. Over the next two weeks, with a target of completing the work by February 13, we will perform a staged update of the RefStack server that will: * Disable anonymous uploads. * Purge the database of unlinked anonymous test results. Test results associated with RefStack accounts will be unaffected. Please reach out to me or any of the other RefStack team members with questions or concerns. -- Megan Guiney -------------- next part -------------- An HTML attachment was scrubbed...
URL: From gmann at ghanshyammann.com Thu Feb 8 03:41:59 2018 From: gmann at ghanshyammann.com (gmann) Date: Thu, 8 Feb 2018 12:41:59 +0900 Subject: [openstack-dev] [QA] Meeting Thursday Feb 8th at 8:00 UTC Message-ID: Hello everyone, Hope everyone is back from vacation. The QA team is resuming the regular weekly meeting starting today. The OpenStack QA team IRC meeting will be on Thursday, Feb 8th at 8:00 UTC in the #openstack-meeting channel. The agenda for the meeting can be found here: https://wiki.openstack.org/wiki/Meetings/QATeamMeeting#Agenda_for_Feb_8th_2018_.280800_UTC.29 Anyone is welcome to add an item to the agenda. -gmann From kennelson11 at gmail.com Thu Feb 8 05:15:49 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 08 Feb 2018 05:15:49 +0000 Subject: [openstack-dev] [PTL][SIG][PTG]Team Photos Message-ID: Hello PTLs and SIG Chairs! So here's the deal: we have 50 spots that are first come, first served. We have slots available before and after lunch on both Tuesday and Thursday. The google sheet here[1] should be set up so you have access to edit, but if you can't for some reason, just reply directly to me and I can add your team to the list (I need the team/SIG name and a contact email). I will be locking the google sheet on *Monday February 26th*, so I need to know if your team is interested by then. See you soon! - Kendall Nelson (diablo_rojo) [1] https://docs.google.com/spreadsheets/d/1J2MRdVQzSyakz9HgTHfwYPe49PaoTypX66eNURsopQY/edit?usp=sharing -------------- next part -------------- An HTML attachment was scrubbed... URL: From moshele at mellanox.com Thu Feb 8 06:06:37 2018 From: moshele at mellanox.com (Moshe Levi) Date: Thu, 8 Feb 2018 06:06:37 +0000 Subject: [openstack-dev] [ironic][neutron] bare metal on vxlan network Message-ID: Hi all, Ironic has supported multi-tenancy for quite a few releases, and according to the spec [1] it can work with VLAN/VXLAN networks. I see a lot of mechanism drivers that support VLAN networks, such as [2] and [3], but I didn't find any mechanism driver that works with VXLAN networks. Does a mechanism driver exist for bare metal that can configure a VTEP on a switch? Help would be appreciated. [1] https://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/ironic-ml2-integration.html [2] https://github.com/openstack/networking-arista [3] https://github.com/openstack/networking-generic-switch -------------- next part -------------- An HTML attachment was scrubbed... URL: From moshele at mellanox.com Thu Feb 8 07:43:37 2018 From: moshele at mellanox.com (Moshe Levi) Date: Thu, 8 Feb 2018 07:43:37 +0000 Subject: [openstack-dev] [ironic][tripleo] support for firmware update Message-ID: Hi all, I saw that ironic-python-agent supports custom hardware managers. I would like to support firmware updates (in my case, a Mellanox NIC) and I was wondering how a custom hardware manager can be used in such a case. How is it integrated with ironic-python-agent, and is there an integration with TripleO as well? The use case for us is just to make sure the correct firmware is installed on the NIC and, if not, update it during the TripleO deployment. [1] - https://docs.openstack.org/ironic-python-agent/pike/contributor/hardware_managers.html -------------- next part -------------- An HTML attachment was scrubbed...
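For reference, a custom hardware manager along the lines of the docs linked above ([1]) is typically structured as in the rough sketch below. The class name, step name, firmware version, device path and mstflint invocation are all illustrative assumptions, not an existing Mellanox driver.

# Rough sketch of a custom ironic-python-agent hardware manager exposing a
# NIC firmware check/update as a cleaning step. MellanoxFirmwareManager,
# update_nic_firmware, MST_DEV and the mstflint calls are assumptions for
# illustration only.
from ironic_python_agent import hardware
from ironic_python_agent import utils

EXPECTED_FW = '14.20.1010'             # assumption: desired firmware version
MST_DEV = '/dev/mst/mt4117_pciconf0'   # assumption: device as seen in the ramdisk


class MellanoxFirmwareManager(hardware.HardwareManager):
    HARDWARE_MANAGER_NAME = 'MellanoxFirmwareManager'
    HARDWARE_MANAGER_VERSION = '1.0'

    def evaluate_hardware_support(self):
        # A real manager would probe for the NIC and return NONE when it is
        # absent; SERVICE_PROVIDER makes these steps win over generic ones.
        return hardware.HardwareSupport.SERVICE_PROVIDER

    def get_clean_steps(self, node, ports):
        # priority=0: the step only runs when explicitly requested (manual
        # cleaning); a positive priority would run it on every automated
        # cleaning cycle.
        return [{'step': 'update_nic_firmware',
                 'priority': 0,
                 'interface': 'deploy',
                 'reboot_requested': True,
                 'abortable': False}]

    def update_nic_firmware(self, node, ports):
        # Flash only when the running firmware differs from the expected one.
        out, _err = utils.execute('mstflint', '-d', MST_DEV, 'query')
        if EXPECTED_FW not in out:
            utils.execute('mstflint', '-d', MST_DEV,
                          '-i', '/tmp/fw-%s.bin' % EXPECTED_FW, 'burn')

The manager would then be registered through an ironic_python_agent.hardware_managers entry point in the package's setup.cfg and the package baked into the deploy ramdisk; that ramdisk is also how it would reach a TripleO deployment, since the overcloud nodes are cleaned and deployed with it.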
URL: From hjensas at redhat.com Thu Feb 8 08:08:33 2018 From: hjensas at redhat.com (Harald Jensas) Date: Thu, 8 Feb 2018 09:08:33 +0100 Subject: [openstack-dev] [tripleo] FFE - Feuture Freeze Exception request for Routed Spine and Leaf Deployment In-Reply-To: References: <1517570931.6277.15.camel@redhat.com> Message-ID: Hi, Thanks for all the reviews. Just one more + CI change and docs left now. **HEADs UP**: I think we might have broken ovb jobs until 537830 is landed once packages are promoted. This is due to a change in Ironic[1] that I realized yesterday is not yet in the packages used by tripleo CI. We should make sure the Prep-CI patch below lands before we have packages promoted. * Prep-CI for routed-networks changes https://review.openstack.org/#/c/541678/ * Install and enable neutron baremetal mech plugin https://review.openstack.org/537830 This needs a rebase, I will do it today. It also needs packages that is not available in repos used by CI. tripleo-docs ------------ * Documentation - TripleO routed-spine-and-leaf https://review.openstack.org/#/c/539939/ I will go over this again today, but so far reviews are good. // Harald [1] https://review.openstack.org/#/c/536040/ On Mon, Feb 5, 2018 at 3:42 AM, Emilien Macchi wrote: > > > On Fri, Feb 2, 2018 at 3:28 AM, Harald Jensås wrote: > >> Requesting: >> Feuture Freeze Exception request for Routed Spine and Leaf Deployment >> >> Blueprints: >> https://blueprints.launchpad.net/tripleo/+spec/tripleo-routed-networks- >> ironic-inspector >> >> https://blueprints.launchpad.net/tripleo/+spec/tripleo-routed-networks- >> deployment >> >> All external dependencies for Routed Spine and Leaf Deployement have >> finally landed. (Except puppet module changes.) >> >> >> Pros >> ==== >> >> This delivers a feature that has been requested since the Kilo release. >> It makes TripleO more viable in large deployments as well as in edge >> use cases where openstack services are not deployed in one datacenter. >> >> The core piece in this is the neutron segments service_plugin. This has >> been around since newton. Most of the instack-undercloud patches were >> first proposed during ocata. >> >> The major change is in the undercloud. In tripleo-heat-templates we >> need just a small change to ensure we get ip addresses allocated from >> neutron when segments service plug-in is enabled in neutron. The >> overcloud configuration stays the same, we already do have users >> deploying routed networks in the isolated networks using composable >> networks so we know it works. >> >> >> Risks >> ===== >> >> I see little risk introducing a regression to current functionality >> with these changes. The major part of the undercloud patches has been >> around for a long time and passing CI. >> >> The format of undercloud.conf is changed, options are deprecated and >> new options are added to enable multiple control plane subnets/l2- >> segments to be defined. All options are properly deprectated, so >> using a configuration file from pike will still work. 
>> >> >> >> ===================================== >> The list of patches that need to land >> ===================================== >> >> instack-undercloud >> ------------------ >> >> * Tripleo routed networks ironic inspector, and Undercloud >> https://review.openstack.org/#/c/437544/ >> * Move ctlplane network/subnet setup to python >> https://review.openstack.org/533364 >> * Update config to use per network groups >> https://review.openstack.org/533365 >> * Update validations to validate all subnets >> https://review.openstack.org/533366 >> * Add support for multiple inspection subnets >> https://review.openstack.org/533367 >> * Create static routes for remote subnets >> https://review.openstack.org/533368 >> * Add per subnet network cidr nat rules >> https://review.openstack.org/533369 >> * Add per subnet masquerading >> https://review.openstack.org/533370 >> * Install and enable neutron baremetal mech plugin >> https://review.openstack.org/537830 >> >> tripleo-heat-templates >> ---------------------- >> >> * Add subnet property to ctlplane network for server resources >> https://review.openstack.org/473817 >> >> tripleo-docs >> ------------ >> >> * Documentation - TripleO routed-spine-and-leaf >> https://review.openstack.org/#/c/539939/ >> >> puppet-neutron >> -------------- >> >> * Add networking-baremetal ml2 plug-in >> https://review.openstack.org/537826 >> * Add networking-baremetal - ironic-neutron-agent >> https://review.openstack.org/539405 >> >> > I'm a bit concerned by the delay of this request. Feature freeze request > deadline was 10 days ago: > https://releases.openstack.org/queens/schedule.html#q-ff > > We're now in the process on producing a release candidate. The amount of > code that needs to land to have the feature completed isn't small but it > looks like well tested and you seems pretty confident. > I'm not sure what to vote on this one tbh because yeah the use-case is > super important, and we know how Queens release is important to us. But at > the same time there is a risk to introduce problems, and delay the > potentially delay the release and after the delivery of other features... > > I guess I'm ok as long as all patches pass ALL CI jobs without exception > and are carefully tested and reviewed. > > Thanks, > -- > Emilien Macchi > -- |Harald Jensås | Cloud Success Architect |hjensas at redhat.com | www.redhat.com |+46 (0)701 91 23 17 | hjensas:irc -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Thu Feb 8 09:03:05 2018 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 8 Feb 2018 10:03:05 +0100 Subject: [openstack-dev] [ptg] Track around release cycle, consumption models and stable branches In-Reply-To: <8fd8b7d0-d7d0-c178-2da9-2697d5932a6e@oracle.com> References: <02df94bd-1a51-e13c-1255-ba6631303937@openstack.org> <8fd8b7d0-d7d0-c178-2da9-2697d5932a6e@oracle.com> Message-ID: <72305d40-6454-9516-4e2e-876fb0b507f0@openstack.org> Borne Mace wrote: > I would be interested in taking part in this discussion as well, OK I think we have enough people to warrant a spot on the pre-scheduled tracks (Tuesday afternoon). I'll make it happen. 
Cheers, -- Thierry Carrez (ttx) From honjo.rikimaru at po.ntt-tx.co.jp Thu Feb 8 09:06:30 2018 From: honjo.rikimaru at po.ntt-tx.co.jp (Rikimaru Honjo) Date: Thu, 8 Feb 2018 18:06:30 +0900 Subject: [openstack-dev] [glance][cinder]Question about cinder as glance store In-Reply-To: <20180207102704.xq7fch4apuqimqif@localhost> References: <624e7b1f-c503-fdea-9866-687a8cc14c8f@po.ntt-tx.co.jp> <20180207102704.xq7fch4apuqimqif@localhost> Message-ID: Hello Gorka, Thank you for replying! I'll try to run glance-api and cinder-volume on the same node according to your information. On 2018/02/07 19:27, Gorka Eguileor wrote: > On 07/02, Rikimaru Honjo wrote: >> Hello, >> >> I'm planning to use cinder as glance store. >> And, I'll setup cinder to connect storage by iSCSI multipath. >> >> In this case, can I run glance-api and cinder-volume on the same node? >> >> In my understanding, glance-api will attach a volume to own node and >> write a uploaded image to the volume if glance backend is cinder. >> I afraid that the race condition of cinder-volume's iSCSI operations >> and glance-api's iSCSI operations. >> Is there possibility of occurring it? >> -- >> _/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/ >> Rikimaru Honjo >> E-mail:honjo.rikimaru at po.ntt-tx.co.jp >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > Hi, > > When properly set with the right configuration and the right system and > OpenStack packages, Cinder, OS-Brick, and Nova no longer have race > conditions with iSCSI operations anymore (single or multipathed), not > even with drivers that do "shared target". > > So I would assume that Glance won't have any issues either as long as > it's properly making the Cinder and OS-Brick calls. > > Cheers, > Gorka. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- _/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/ Rikimaru Honjo E-mail:honjo.rikimaru at po.ntt-tx.co.jp From thierry at openstack.org Thu Feb 8 09:08:01 2018 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 8 Feb 2018 10:08:01 +0100 Subject: [openstack-dev] [ptg] Lightning talks In-Reply-To: <20180208002535.GA14568@gmail.com> References: <20180208002535.GA14568@gmail.com> Message-ID: <0bac74b5-4500-0280-2cc9-54d2fa2337aa@openstack.org> Mike Perez wrote: > [...] > Appropriate 5 minute talk examples: > * Neat features in libraries like oslo that we should consider adopting in our > community wide goals. > * Features and tricks in your favorite editor that makes doing work easier. > * Infra tools that maybe not a lot of people know about yet. Zuul v3 explained > in five minutes anyone? Note that we'll have an infra talk about Zuulv3 (and other things you should know about OpenStack project infrastructure in 2018) on the Tuesday, so that is likely to be covered already :) > * Some potential API specification from the API SIG that we should adopt as > a community wide goal. I'd say it's also fine to talk about something of interest to the PTG crowd that you're passionate about and is not directly tied to OpenStack! 
Cheers, -- Thierry Carrez (ttx) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: OpenPGP digital signature URL: From rico.lin.guanyu at gmail.com Thu Feb 8 09:38:20 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Thu, 8 Feb 2018 17:38:20 +0800 Subject: [openstack-dev] [heat][rally] What should we do with legacy-rally-dsvm-fakevirt-heat In-Reply-To: References: Message-ID: Thx Andrey, looking forward to new rally job At meanwhile, seems current job is broken [1] and we're expecting for a new job to replace. We will remove the old legacy one (see patch [2]) for now if that won't break rally (in any way). I'm changing only config for heat (under zuul.d/projects.yaml), won't touch `legacy-rally-dsvm-fakevirt-heat` itself (I guess that is up to rally team to remove it later) [1] http://logs.openstack.org/29/531929/3/experimental/legacy-rally-dsvm-fakevirt-heat/cd797f4/job-output.txt.gz#_2018-02-08_05_02_37_258609 [2] https://review.openstack.org/542111 2018-02-08 3:39 GMT+08:00 Andrey Kurilin : > Hi Rico and stackers, > > Thanks for raising this topic. > > Short answer: please leave it as is for now. Rally team will work on > ZuulV3 jobs soon. > > Detailed: We are planning to make some big changes in our architecture > which includes splitting the main repo into a separate repository for a > framework and a separate repository for all OpenStack plugins. > To minimize work across all projects which have Rally job, we decided to > pause working on ZuulV3 until the split will be finished. > As for estimates, I'm planning to make the final release before splitting > today or tomorrow. As soon as new release will be ready, we will start > working on splitting and CI as well. > > Thanks for the patient and for using Rally! > > 2018-02-07 10:13 GMT+02:00 Rico Lin : > >> Hi heat and rally team >> >> Right now, in heat's zuul jobs. We still got one legacy job to change >> `legacy-rally-dsvm-fakevirt-heat` [1] which I already put a patch out >> here [2], but after discussion with infra team, it seems best if we can >> define this in rally, and reference it in heat. >> So my question to rally team for all these will be, do we still need this >> job? and how you guys think about if we put that into rally? >> >> [1] https://github.com/openstack-infra/project-config/blob/ >> master/zuul.d/projects.yaml#L6979 >> [2] https://review.openstack.org/#/c/509141 >> -- >> May The Force of OpenStack Be With You, >> >> *Rico Lin*irc: ricolin >> >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > > -- > Best regards, > Andrey Kurilin. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From witold.bedyk at est.fujitsu.com Thu Feb 8 10:05:23 2018 From: witold.bedyk at est.fujitsu.com (Bedyk, Witold) Date: Thu, 8 Feb 2018 10:05:23 +0000 Subject: [openstack-dev] [release][requirements][monasca] FFE request for monasca-common Message-ID: <0ce6ec79a4b742dd8475dacfb0bed15c@R01UKEXCASM126.r01.fujitsu.local> Hello, I would like to request FFE for monasca-common to be bumped in upper constraints. The version has been bumped together with the rest of Monasca components [1]. Monasca-common is used only in Monasca projects [2]. Best greetings Witek [1] https://review.openstack.org/541767 [2] http://codesearch.openstack.org/?q=monasca-common&i=nope&files=.*requirements.*&repos= From andr.kurilin at gmail.com Thu Feb 8 10:12:11 2018 From: andr.kurilin at gmail.com (Andrey Kurilin) Date: Thu, 8 Feb 2018 12:12:11 +0200 Subject: [openstack-dev] [heat][rally] What should we do with legacy-rally-dsvm-fakevirt-heat In-Reply-To: References: Message-ID: Hi Rico 2018-02-08 11:38 GMT+02:00 Rico Lin : > Thx Andrey, looking forward to new rally job > > At meanwhile, seems current job is broken [1] > Based on the error, the fix requires +2;-2 change to https://github.com/openstack/heat/blob/master/rally-scenarios/plugins/stack_output.py . I can propose a patch with a fix and we can leave legacy-rally job at experimental queue. Or we can return to this after sometime. I'm ok about both cases. > and we're expecting for a new job to replace. > We will remove the old legacy one (see patch [2]) for now if that won't > break rally (in any way). > I'm changing only config for heat (under zuul.d/projects.yaml), won't > touch `legacy-rally-dsvm-fakevirt-heat` itself (I guess that is up to > rally team to remove it later) > > [1] http://logs.openstack.org/29/531929/3/experimental/ > legacy-rally-dsvm-fakevirt-heat/cd797f4/job-output.txt. > gz#_2018-02-08_05_02_37_258609 > [2] https://review.openstack.org/542111 > > 2018-02-08 3:39 GMT+08:00 Andrey Kurilin : > >> Hi Rico and stackers, >> >> Thanks for raising this topic. >> >> Short answer: please leave it as is for now. Rally team will work on >> ZuulV3 jobs soon. >> >> Detailed: We are planning to make some big changes in our architecture >> which includes splitting the main repo into a separate repository for a >> framework and a separate repository for all OpenStack plugins. >> To minimize work across all projects which have Rally job, we decided to >> pause working on ZuulV3 until the split will be finished. >> As for estimates, I'm planning to make the final release before splitting >> today or tomorrow. As soon as new release will be ready, we will start >> working on splitting and CI as well. >> >> Thanks for the patient and for using Rally! >> >> 2018-02-07 10:13 GMT+02:00 Rico Lin : >> >>> Hi heat and rally team >>> >>> Right now, in heat's zuul jobs. We still got one legacy job to change >>> `legacy-rally-dsvm-fakevirt-heat` [1] which I already put a patch out >>> here [2], but after discussion with infra team, it seems best if we can >>> define this in rally, and reference it in heat. >>> So my question to rally team for all these will be, do we still need >>> this job? and how you guys think about if we put that into rally? 
>>> >>> [1] https://github.com/openstack-infra/project-config/blob/m >>> aster/zuul.d/projects.yaml#L6979 >>> [2] https://review.openstack.org/#/c/509141 >>> -- >>> May The Force of OpenStack Be With You, >>> >>> *Rico Lin*irc: ricolin >>> >>> >>> ____________________________________________________________ >>> ______________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.op >>> enstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >> >> >> -- >> Best regards, >> Andrey Kurilin. >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > > -- > May The Force of OpenStack Be With You, > > *Rico Lin*irc: ricolin > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Best regards, Andrey Kurilin. -------------- next part -------------- An HTML attachment was scrubbed... URL: From witold.bedyk at est.fujitsu.com Thu Feb 8 10:20:56 2018 From: witold.bedyk at est.fujitsu.com (Bedyk, Witold) Date: Thu, 8 Feb 2018 10:20:56 +0000 Subject: [openstack-dev] [monasca] Feature freeze lifted Message-ID: Hello, Yesterday I have created stable/queens branches for our repos. It will be used to create the final Queens release. We can continue the work on new features on master. Cheers Witek From dtantsur at redhat.com Thu Feb 8 10:33:52 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Thu, 8 Feb 2018 11:33:52 +0100 Subject: [openstack-dev] [OpenStackClient][Security][ec2-api][heat][horizon][ironic][kuryr][magnum][manila][masakari][neutron][senlin][shade][solum][swift][tacker][tricircle][vitrage][watcher][winstackers] Help needed for your release In-Reply-To: <20180207213102.GK23143@thor.bakeyournoodle.com> References: <20180207162357.6vbsj5ty76hvhxiw@gentoo.org> <271f73e0-8dc6-d680-711f-6cf4f1911254@redhat.com> <20180207211822.GJ23143@thor.bakeyournoodle.com> <20180207213102.GK23143@thor.bakeyournoodle.com> Message-ID: <7921c7b2-8841-86bb-abe3-09f64d624208@redhat.com> On 02/07/2018 10:31 PM, Tony Breeds wrote: > On Thu, Feb 08, 2018 at 08:18:37AM +1100, Tony Breeds wrote: > >> Okay It's safe to ignore then ;P We should probably remove it from >> projects.txt if it really is empty I'll propose that. > > Oh my bad, ironic-python-agent-builder was included as it's included as > an ironic project[1] NOT because it;s listed in projects.txt. Given > that it's clearly not for me to remove anything. > > Having said that if the project hasn't had any updates at all since it's > creation in July 2017 perhaps it's no longer needed and could be > removed? We do plan to use it, we just never had time to populate it :( > > Yours Tony. 
> > [1] http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml#n1539 > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From geguileo at redhat.com Thu Feb 8 13:32:54 2018 From: geguileo at redhat.com (Gorka Eguileor) Date: Thu, 8 Feb 2018 14:32:54 +0100 Subject: [openstack-dev] [glance][cinder]Question about cinder as glance store In-Reply-To: References: <624e7b1f-c503-fdea-9866-687a8cc14c8f@po.ntt-tx.co.jp> <20180207102704.xq7fch4apuqimqif@localhost> Message-ID: <20180208133254.j446pwegs4w6dqma@localhost> On 07/02, Erlon Cruz wrote: > That will depend on the Cinder/OS-brick iscsiadm versions right? Can you > tell what are the versions from where the problem was fixed? > > Erlon > Hi, Like you said it depends on your Cinder and OS-Brick versions, and the open-iscsi package will be different depending on your distro, for RHOS 7.4 this is iscsi-initiator-utils version 6.2.0.874-2 or later. Things working fine may also depend on your multipath and Cinder/Nova configuration as well as LVM filters (if your deployment can have images that have LVM). I think that's all that need to be properly setup. Cheers, Gorka. > 2018-02-07 8:27 GMT-02:00 Gorka Eguileor : > > > On 07/02, Rikimaru Honjo wrote: > > > Hello, > > > > > > I'm planning to use cinder as glance store. > > > And, I'll setup cinder to connect storage by iSCSI multipath. > > > > > > In this case, can I run glance-api and cinder-volume on the same node? > > > > > > In my understanding, glance-api will attach a volume to own node and > > > write a uploaded image to the volume if glance backend is cinder. > > > I afraid that the race condition of cinder-volume's iSCSI operations > > > and glance-api's iSCSI operations. > > > Is there possibility of occurring it? > > > -- > > > _/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/ > > > Rikimaru Honjo > > > E-mail:honjo.rikimaru at po.ntt-tx.co.jp > > > > > > > > > > > > ____________________________________________________________ > > ______________ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > > unsubscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > Hi, > > > > When properly set with the right configuration and the right system and > > OpenStack packages, Cinder, OS-Brick, and Nova no longer have race > > conditions with iSCSI operations anymore (single or multipathed), not > > even with drivers that do "shared target". > > > > So I would assume that Glance won't have any issues either as long as > > it's properly making the Cinder and OS-Brick calls. > > > > Cheers, > > Gorka. 
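To make the "properly making the Cinder and OS-Brick calls" point in this thread a bit more concrete: the multipath behaviour hinges on the connector properties the consuming service builds with os-brick. The snippet below is only a rough sketch of those calls, not glance_store's actual code path, and the root helper and IP values are placeholder assumptions.

# Rough sketch of the os-brick calls a consuming service (glance-api via the
# cinder store, in this thread) relies on to attach a volume with multipath;
# ROOT_HELPER and MY_IP are placeholder assumptions.
from os_brick.initiator import connector

ROOT_HELPER = 'sudo'   # assumption: real services use rootwrap/privsep helpers
MY_IP = '192.0.2.10'   # assumption: the node's storage-network IP

# Connector properties handed to Cinder's initialize_connection; asking for
# multipath here is what lets the backend return multipath-capable targets.
props = connector.get_connector_properties(
    ROOT_HELPER, MY_IP, multipath=True, enforce_multipath=True)

# connection_info would come back from Cinder (e.g. cinderclient's
# volumes.initialize_connection(volume, props)).
iscsi = connector.InitiatorConnector.factory(
    'ISCSI', ROOT_HELPER, use_multipath=True)
# device = iscsi.connect_volume(connection_info['data'])
# ... do I/O against device['path'], then disconnect_volume() when done ...

With recent os-brick this sequencing is what avoids the iSCSI session races discussed above, regardless of whether cinder-volume runs on the same node.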
> > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From no-reply at openstack.org Thu Feb 8 13:34:25 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 08 Feb 2018 13:34:25 -0000 Subject: [openstack-dev] [senlin] senlin 5.0.0.0rc1 (queens) Message-ID: Hello everyone, A new release candidate for senlin for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/senlin/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/senlin/log/?h=stable/queens Release notes for senlin can be found at: http://docs.openstack.org/releasenotes/senlin/ If you find an issue that could be considered release-critical, please file it at: https://bugs.launchpad.net/senlin-tempest-plugin and tag it *queens-rc-potential* to bring it to the senlin release crew's attention. From dirk at dmllr.de Thu Feb 8 13:41:21 2018 From: dirk at dmllr.de (=?UTF-8?B?RGlyayBNw7xsbGVy?=) Date: Thu, 8 Feb 2018 14:41:21 +0100 Subject: [openstack-dev] [release][requirements][monasca] FFE request for monasca-common In-Reply-To: <0ce6ec79a4b742dd8475dacfb0bed15c@R01UKEXCASM126.r01.fujitsu.local> References: <0ce6ec79a4b742dd8475dacfb0bed15c@R01UKEXCASM126.r01.fujitsu.local> Message-ID: 2018-02-08 11:05 GMT+01:00 Bedyk, Witold : > I would like to request FFE for monasca-common to be bumped in upper constraints. The version has been bumped together with the rest of Monasca components [1]. Monasca-common is used only in Monasca projects [2]. The changes between 2.7.0 and 2.8.0 are: +2.8.0 +----- + +* Enable tempest tests as voting +* Add messages for testing unicode +* Zuul: Remove project name +* Remove not used mox library +* Updated from global requirements +* Updated from global requirements The requirements changes are removal of mox, and fix for oslotest to sync the queens level requirements. so this looks good to me. +1 Greetings, Dirk From sean.mcginnis at gmx.com Thu Feb 8 13:48:53 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 8 Feb 2018 07:48:53 -0600 Subject: [openstack-dev] [release] Release countdown for week R-2, February 10 - 16 Message-ID: <20180208134853.GA26293@sm-xps> The end is near! Before I continue with the countdown info for next week, just a reminder that today, 8th February, is the RC1 milestone. Please make sure your project submits a release patch for an RC (cycle-with-milestones) or final (cycle-with-intermediary) release with a request to create a stable/queens branch today if you have not already done so. Development Focus ----------------- Teams should be working on release critical bugs in preparation of the final release. 
General Information ------------------- Any cycle-with-milestones projects that missed the RC1 deadline should prepare an RC1 release as soon as possible. After all of the cycle-with-milestone projects have branched we will branch devstack, grenade, and the requirements repos. This will effectively open them back up for Rocky development, though the focus should still be on finishing up Queens until the final release. Actions --------- Watch for any translation patches coming through and merge them quickly. If your project has a stable/queens branch created, please make sure those patches are also merged there. Keep in mind there will need to be a final release candidate cut to capture any merged translations and critical bug fixes from this branch. Please also check for completeness in release notes and add any relevant "prelude" content. These notes are targetted for the downstream consumers of your project, so it would be great to include any useful information for those that are going to pick up and use or deploy the Queens version of your project. We also have the cycle-highlights information in the project deliverable files. This one is targeted at marketing and other consumers that typically been pinging PTLs every release asking for "what's new" in this release. If you have not done so already, please add a few highlights for your team that would be useful for this kind of consumer. This would be a good time for any release:independent projects to add the history for any releases not yet listed in their deliverable file. These files are under the deliverable/_independent directory in the openstack/releases repo. We are still missing cycle-with-intermediary releases for heat-translator, patrole, and tacker-horizon. If we do not receive release requests for these repos soon we will be forced to create a release from the latest commit to create a stable/queens branch. The release team would rather not be the ones initiating this release, so please submit a release patch for these as soon as possible. Upcoming Deadlines & Dates -------------------------- Rocky PTL election close: February 14 Final Queens Release Candidates: February 22 Rocky PTG in Dublin: Week of February 26 Queens cycle-trailing RC deadline: March 1 -- Sean McGinnis (smcginnis) From thierry at openstack.org Thu Feb 8 14:01:08 2018 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 8 Feb 2018 15:01:08 +0100 Subject: [openstack-dev] [requirements][release] FFE for constraints update for python-tackerclient bug-fix release In-Reply-To: <20180207160742.ejnudolq774x6zu6@gentoo.org> References: <20180207160742.ejnudolq774x6zu6@gentoo.org> Message-ID: <4ef75b2c-5b93-4e09-a178-e9c17fec1461@openstack.org> Matthew Thode wrote: > It should have time to get in for the freeze, the question I have is > 'What in openstack is broken if we update upper-contraints after the > freeze instead of before?' > A follow up question is 'does this need a global-requirements.txt bump?' In other words, will the new tacker just work with the old client release (just not exposing the new feature), or will it fail ? If it fails, the global-requirements needs to be bumped to require >=0.12.0... -- Thierry Carrez (ttx) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: OpenPGP digital signature URL: From armamig at gmail.com Thu Feb 8 14:06:06 2018 From: armamig at gmail.com (Armando M.) 
Date: Thu, 8 Feb 2018 09:06:06 -0500 Subject: [openstack-dev] [neutron] cycle highlights for sub-projects In-Reply-To: References: Message-ID: On 2 February 2018 at 13:33, Armando M. wrote: > Hi neutrinos, > > RC1 is fast approaching and this time we can add highlights to the release > files [1]. If I can ask you anyone interested in contributing to the > highlights: please review [2]. > > Miguel and I will make sure they are compiled correctly. We have time > until Feb 9 to get this done. > > Many thanks, > Armando > > [1] http://lists.openstack.org/pipermail/openstack-dev/ > 2017-December/125613.html > [2] https://review.openstack.org/#/c/540476/ > Reminder before we cut RC1 by EOB today. Cheers, Armando -------------- next part -------------- An HTML attachment was scrubbed... URL: From no-reply at openstack.org Thu Feb 8 14:54:44 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 08 Feb 2018 14:54:44 -0000 Subject: [openstack-dev] [octavia] neutron-lbaas-dashboard 4.0.0.0rc1 (queens) Message-ID: Hello everyone, A new release candidate for neutron-lbaas-dashboard for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/neutron-lbaas-dashboard/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/neutron-lbaas-dashboard/log/?h=stable/queens Release notes for neutron-lbaas-dashboard can be found at: http://docs.openstack.org/releasenotes/neutron-lbaas-dashboard/ If you find an issue that could be considered release-critical, please file it at: https://storyboard.openstack.org/#!/project/907 and tag it *queens-rc-potential* to bring it to the neutron-lbaas-dashboard release crew's attention. From no-reply at openstack.org Thu Feb 8 14:55:32 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 08 Feb 2018 14:55:32 -0000 Subject: [openstack-dev] [octavia] octavia 2.0.0.0rc1 (queens) Message-ID: Hello everyone, A new release candidate for octavia for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/octavia/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/octavia/log/?h=stable/queens Release notes for octavia can be found at: http://docs.openstack.org/releasenotes/octavia/ From no-reply at openstack.org Thu Feb 8 14:56:28 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 08 Feb 2018 14:56:28 -0000 Subject: [openstack-dev] [octavia] neutron-lbaas 12.0.0.0rc1 (queens) Message-ID: Hello everyone, A new release candidate for neutron-lbaas for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/neutron-lbaas/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! 
Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/neutron-lbaas/log/?h=stable/queens Release notes for neutron-lbaas can be found at: http://docs.openstack.org/releasenotes/neutron-lbaas/ From no-reply at openstack.org Thu Feb 8 14:57:29 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 08 Feb 2018 14:57:29 -0000 Subject: [openstack-dev] [octavia] octavia-dashboard 1.0.0.0rc1 (queens) Message-ID: Hello everyone, A new release candidate for octavia-dashboard for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/octavia-dashboard/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/octavia-dashboard/log/?h=stable/queens Release notes for octavia-dashboard can be found at: http://docs.openstack.org/releasenotes/octavia-dashboard/ If you find an issue that could be considered release-critical, please file it at: https://storyboard.openstack.org/#!/project/909 and tag it *queens-rc-potential* to bring it to the octavia-dashboard release crew's attention. From ifat.afek at nokia.com Thu Feb 8 15:37:11 2018 From: ifat.afek at nokia.com (Afek, Ifat (Nokia - IL/Kfar Sava)) Date: Thu, 8 Feb 2018 15:37:11 +0000 Subject: [openstack-dev] [vitrage] Tagged Vitrage release candidates and created stable/queens branches Message-ID: <8C5A6291-2343-4C6A-B9E5-D6102E46232A@nokia.com> Hi, I tagged the following release candidates for Vitrage: vitrage 2.1.0 vitrage-dashboard 1.4.1 python-vitrageclient 2.0.0 (already tagged) All these repositories now have stable/queens branches, so the master can be used for Rocky development. Thanks, Ifat. From prometheanfire at gentoo.org Thu Feb 8 15:46:27 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Thu, 8 Feb 2018 09:46:27 -0600 Subject: [openstack-dev] [release][requirements][monasca] FFE request for monasca-common In-Reply-To: References: <0ce6ec79a4b742dd8475dacfb0bed15c@R01UKEXCASM126.r01.fujitsu.local> Message-ID: <20180208154627.k7crlg5xsn7rlazl@gentoo.org> On 18-02-08 14:41:21, Dirk Müller wrote: > 2018-02-08 11:05 GMT+01:00 Bedyk, Witold : > > > I would like to request FFE for monasca-common to be bumped in upper constraints. The version has been bumped together with the rest of Monasca components [1]. Monasca-common is used only in Monasca projects [2]. > > The changes between 2.7.0 and 2.8.0 are: > > +2.8.0 > +----- > + > +* Enable tempest tests as voting > +* Add messages for testing unicode > +* Zuul: Remove project name > +* Remove not used mox library > +* Updated from global requirements > +* Updated from global requirements > > > The requirements changes are removal of mox, and fix for oslotest to > sync the queens level requirements. so this looks good to me. +1 > Yep, LGTM as well (just approved) -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From kendall at openstack.org Thu Feb 8 15:57:58 2018 From: kendall at openstack.org (Kendall Waters) Date: Thu, 8 Feb 2018 09:57:58 -0600 Subject: [openstack-dev] [PTG] Last Chance for PTG Dublin Tickets Message-ID: Hi everyone, Yesterday we sold out the upcoming Dublin PTG, and have since received many requests for more tickets. We have been working with the venue to accommodate extra capacity, but every additional attendee incrementally increases our costs $600. We understand the importance of this event and the need to have key team members present, so we have negotiated an additional 100 tickets which we will partially subsidize to be sold at a $400 ticket price. We recognize this is significantly more than we have had to charge in the past, but we still hope you can join us in Dublin. Please let me know if you have any questions. Cheers, Kendall Kendall Waters OpenStack Marketing kendall at openstack.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From e0ne at e0ne.info Thu Feb 8 16:08:39 2018 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Thu, 8 Feb 2018 18:08:39 +0200 Subject: [openstack-dev] [horizon] PTG Planning Etherpad Message-ID: Hi team, In case if you missed it, it's a friendly reminder that we've got etherpad [1] with Rocky PTG topics proposals. If you're going to attend it or want to get some topic discussed, please add your name and topic to the list [1]. Hope to see you all in Dublin. [1] https://etherpad.openstack.org/p/horizon-ptg-rocky Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From no-reply at openstack.org Thu Feb 8 16:15:56 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 08 Feb 2018 16:15:56 -0000 Subject: [openstack-dev] nova_powervm 6.0.0.0rc1 (queens) Message-ID: Hello everyone, A new release candidate for nova_powervm for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/nova-powervm/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/nova_powervm/log/?h=stable/queens Release notes for nova_powervm can be found at: http://docs.openstack.org/releasenotes/nova_powervm/ From no-reply at openstack.org Thu Feb 8 16:29:43 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 08 Feb 2018 16:29:43 -0000 Subject: [openstack-dev] networking-powervm 6.0.0.0rc1 (queens) Message-ID: Hello everyone, A new release candidate for networking-powervm for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/networking-powervm/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! 
Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/networking-powervm/log/?h=stable/queens Release notes for networking-powervm can be found at: http://docs.openstack.org/releasenotes/networking-powervm/ From alex.kavanagh at canonical.com Thu Feb 8 16:42:29 2018 From: alex.kavanagh at canonical.com (Alex Kavanagh) Date: Thu, 8 Feb 2018 16:42:29 +0000 Subject: [openstack-dev] [charms] Propose Dmitrii Shcherbakov for OpenStack Charmers team. Message-ID: Hi I'd like to propose Dmitrii Shcherbakov to join the launchpad "OpenStack Charmers" team. He's done some tremendous work on existing the charms, has developed some new ones, and has really developed his understanding of configuring and implementing OpenStack. I think he'd make a great addition to the team. Thanks Alex. -- Alex Kavanagh - Software Engineer Cloud Dev Ops - Solutions & Product Engineering - Canonical Ltd -------------- next part -------------- An HTML attachment was scrubbed... URL: From majopela at redhat.com Thu Feb 8 16:45:02 2018 From: majopela at redhat.com (Miguel Angel Ajo Pelayo) Date: Thu, 08 Feb 2018 16:45:02 +0000 Subject: [openstack-dev] [neutron] [networking-ovn] Rocky PTG Message-ID: I have created an etherpad for networking-ovn, if https://etherpad.openstack.org/p/networking-ovn-ptg-rocky with some topics I thought are relevant. But please feel free to add anything you believe it could be interesting and fill attendance so it's easier to sync & meet. :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From miguel at mlavalle.com Thu Feb 8 16:56:21 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Thu, 8 Feb 2018 10:56:21 -0600 Subject: [openstack-dev] [PTL][SIG][PTG]Team Photos In-Reply-To: References: Message-ID: Hi Kendall, Can you add Neutron on Thursday at 2pm. If that is not available, then anytime Wednesday or Thursday. I am the contact: miguel at mlavalle.com Thanks On Wed, Feb 7, 2018 at 11:15 PM, Kendall Nelson wrote: > Hello PTLs and SIG Chairs! > > So here's the deal, we have 50 spots that are first come, first served. We > have slots available before and after lunch both Tuesday and Thursday. > > The google sheet here[1] should be set up so you have access to edit, but > if you can't for some reason just reply directly to me and I can add your > team to the list (I need team/sig name and contact email). > > I will be locking the google sheet on *Monday February 26th so I need to > know if your team is interested by then. * > > See you soon! > > - Kendall Nelson (diablo_rojo) > > [1] https://docs.google.com/spreadsheets/d/1J2MRdVQzSyakz9HgTHfwYPe49PaoT > ypX66eNURsopQY/edit?usp=sharing > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From no-reply at openstack.org Thu Feb 8 17:03:44 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 08 Feb 2018 17:03:44 -0000 Subject: [openstack-dev] [blazar] blazar 1.0.0.0rc1 (queens) Message-ID: Hello everyone, A new release candidate for blazar for the end of the Queens cycle is available! 
You can find the source code tarball at: https://tarballs.openstack.org/blazar/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/blazar/log/?h=stable/queens Release notes for blazar can be found at: http://docs.openstack.org/releasenotes/blazar/ If you find an issue that could be considered release-critical, please file it at: https://launchpad.net/blazar and tag it *queens-rc-potential* to bring it to the blazar release crew's attention. From mitchell at arista.com Thu Feb 8 17:23:11 2018 From: mitchell at arista.com (Mitchell Jameson) Date: Thu, 8 Feb 2018 09:23:11 -0800 Subject: [openstack-dev] [ironic][neutron] bare metal on vxlan network In-Reply-To: References: Message-ID: Hi Moshe, I'm not aware of any mechanism drivers that actually create the VTEP (that will be a manual step,) but there are drivers that support Hierarchical Port Binding [1]. The networking-arista mechanism driver [2] is one such driver, but there are others. Such drivers will configure a VLAN to VNI mapping on switches based on the segmentation IDs of the neutron network segments bound at each port binding level. In such a deployment, you'd create a VXLAN network in neutron. You could then launch a baremetal instance on that network. The mechanism driver will then be responsible for dynamically allocating a VLAN for the baremetal<->TOR switch segment and mapping that VLAN to the VXLAN network's VNI on the TOR switch such that the baremetal instance is connected to the VXLAN fabric segment. [1] https://specs.openstack.org/openstack/neutron-specs/ specs/kilo/ml2-hierarchical-port-binding.html [2] https://github.com/openstack/networking-arista On Wed, Feb 7, 2018 at 10:06 PM, Moshe Levi wrote: > Hi all, > > > > Ironic supports mutli tenancy for quite few releases and according to the > spec [1] it can work with vlan/vxlan networks. > > I see lot of mechanism driver that support vlan network such as [2] and > [3] , but I didn't find any mechanism driver that work on vxlan network. > > Is there a mechanism driver that can configure vtep on a switch exist for > the bare metal? > > > > Help would be appreciated > > > > > > [1] https://specs.openstack.org/openstack/ironic-specs/specs/ > not-implemented/ironic-ml2-integration.html > > [2] https://github.com/openstack/networking-arista > > [3] https://github.com/openstack/networking-generic-switch > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Thu Feb 8 17:37:05 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 08 Feb 2018 17:37:05 +0000 Subject: [openstack-dev] [PTL][SIG][PTG]Team Photos In-Reply-To: References: Message-ID: Done! On Thu, 8 Feb 2018, 8:56 am Miguel Lavalle, wrote: > Hi Kendall, > > Can you add Neutron on Thursday at 2pm. If that is not available, then > anytime Wednesday or Thursday. 
I am the contact: miguel at mlavalle.com > > Thanks > > On Wed, Feb 7, 2018 at 11:15 PM, Kendall Nelson > wrote: > >> Hello PTLs and SIG Chairs! >> >> So here's the deal, we have 50 spots that are first come, first >> served. We have slots available before and after lunch both Tuesday and >> Thursday. >> >> The google sheet here[1] should be set up so you have access to edit, but >> if you can't for some reason just reply directly to me and I can add your >> team to the list (I need team/sig name and contact email). >> >> I will be locking the google sheet on *Monday February 26th so I need to >> know if your team is interested by then. * >> >> See you soon! >> >> - Kendall Nelson (diablo_rojo) >> >> [1] >> https://docs.google.com/spreadsheets/d/1J2MRdVQzSyakz9HgTHfwYPe49PaoTypX66eNURsopQY/edit?usp=sharing >> >> >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrea.frittoli at gmail.com Thu Feb 8 17:39:20 2018 From: andrea.frittoli at gmail.com (Andrea Frittoli) Date: Thu, 08 Feb 2018 17:39:20 +0000 Subject: [openstack-dev] [PTL][SIG][PTG]Team Photos In-Reply-To: References: Message-ID: Hello Kendall, QA Team Tue at 11:00 please :) Andrea On Thu, Feb 8, 2018 at 5:37 PM Kendall Nelson wrote: > Done! > > On Thu, 8 Feb 2018, 8:56 am Miguel Lavalle, wrote: > >> Hi Kendall, >> >> Can you add Neutron on Thursday at 2pm. If that is not available, then >> anytime Wednesday or Thursday. I am the contact: miguel at mlavalle.com >> >> Thanks >> >> On Wed, Feb 7, 2018 at 11:15 PM, Kendall Nelson >> wrote: >> >>> Hello PTLs and SIG Chairs! >>> >>> So here's the deal, we have 50 spots that are first come, first >>> served. We have slots available before and after lunch both Tuesday and >>> Thursday. >>> >>> The google sheet here[1] should be set up so you have access to edit, >>> but if you can't for some reason just reply directly to me and I can add >>> your team to the list (I need team/sig name and contact email). >>> >>> I will be locking the google sheet on *Monday February 26th so I need >>> to know if your team is interested by then. * >>> >>> See you soon! 
>>> >>> - Kendall Nelson (diablo_rojo) >>> >>> [1] >>> https://docs.google.com/spreadsheets/d/1J2MRdVQzSyakz9HgTHfwYPe49PaoTypX66eNURsopQY/edit?usp=sharing >>> >>> >>> >>> >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ed at leafe.com Thu Feb 8 17:46:02 2018 From: ed at leafe.com (Ed Leafe) Date: Thu, 8 Feb 2018 11:46:02 -0600 Subject: [openstack-dev] [all][api] POST /api-sig/news Message-ID: <3338159B-EFAC-4AC0-BB16-EBDED9BEFF57@leafe.com> Greetings OpenStack community, Today's meeting was chock-full of interesting discussion. Let me recap it for you. We began with a follow-up conversation about the use of "action" URLs (as opposed to resource-based URLs). The origin of this discussion came from an email posted to the dev list by Tommy Hu [7], describing how several types of actions are currently being handled through the cinder and nova REST interfaces. After last week's API-SIG discussion, edleafe replied [8], which was followed by some more email discussion. It seems that there is still a lot of impetus to simply "get things working" instead of "let's do it in a consistent manner across OpenStack". If you have an opinion on this issue, please reply on the mailing list! On the topic of the PTG, the SIG has created an etherpad [9] where agenda items are starting to be proposed. If you have any topic that you would like to discuss, or see discussed, please add it to that etherpad. We also decided that we are not cute enough to merit taking a group photo at the PTG. We discussed the spec by Gilles Dubreuil [10] for creating a guideline for API-Schema to make APIs more machine-discoverable. We felt that this was more of a one-off need rather than something we'd like to see rolled out across all OpenStack APIs. Furthermore, API-Schema will be problematic for services that use microversions. If you have some insight or opinions on this, please add your comments to that review. cdent then won the award for the Quote of the Day [11]. Finally, we discussed a bug [12] that is the result of the Nova API not properly including caching information in the headers of its replies. There is some pushback from the Nova team as to whether this is a bug or a request for a new feature. We unanimously agreed that it is indeed a bug, and should be remedied as soon as possible. Again, please add your perspective to that bug report. As always if you're interested in helping out, in addition to coming to the meetings, there's also: * The list of bugs [5] indicates several missing or incomplete guidelines. 
* The existing guidelines [2] always need refreshing to account for changes over time. If you find something that's not quite right, submit a patch [6] to fix it. * Have you done something for which you think guidance would have made things easier but couldn't find any? Submit a patch and help others [6]. # Newly Published Guidelines None this week. # API Guidelines Proposed for Freeze Guidelines that are ready for wider review by the whole community. None this week. # Guidelines Currently Under Review [3] * Add guideline on exposing microversions in SDKs https://review.openstack.org/#/c/532814/ * Add API-schema guide (still being defined) https://review.openstack.org/#/c/524467/ * A (shrinking) suite of several documents about doing version and service discovery Start at https://review.openstack.org/#/c/459405/ * WIP: microversion architecture archival doc (very early; not yet ready for review) https://review.openstack.org/444892 # Highlighting your API impacting issues If you seek further review and insight from the API SIG about APIs that you are developing or changing, please address your concerns in an email to the OpenStack developer mailing list[1] with the tag "[api]" in the subject. In your email, you should include any relevant reviews, links, and comments to help guide the discussion of the specific challenge you are facing. To learn more about the API SIG mission and the work we do, see our wiki page [4] and guidelines [2]. Thanks for reading and see you next week! # References [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [2] http://specs.openstack.org/openstack/api-wg/ [3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z [4] https://wiki.openstack.org/wiki/API_SIG [5] https://bugs.launchpad.net/openstack-api-wg [6] https://git.openstack.org/cgit/openstack/api-wg [7] http://lists.openstack.org/pipermail/openstack-dev/2018-January/126334.html [8] http://lists.openstack.org/pipermail/openstack-dev/2018-February/126891.html [9] https://etherpad.openstack.org/p/api-sig-ptg-rocky [10] https://review.openstack.org/#/c/524467/ [11] http://eavesdrop.openstack.org/meetings/api_sig/2018/api_sig.2018-02-08-16.00.log.html#l-131 [12] https://bugs.launchpad.net/nova/+bug/1747935 Meeting Agenda https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda Past Meeting Records http://eavesdrop.openstack.org/meetings/api_sig/ Open Bugs https://bugs.launchpad.net/openstack-api-wg -- Ed Leafe From no-reply at openstack.org Thu Feb 8 17:47:40 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 08 Feb 2018 17:47:40 -0000 Subject: [openstack-dev] [cinder] cinder 12.0.0.0rc1 (queens) Message-ID: Hello everyone, A new release candidate for cinder for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/cinder/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! 
Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/cinder/log/?h=stable/queens Release notes for cinder can be found at: http://docs.openstack.org/releasenotes/cinder/ From johnsomor at gmail.com Thu Feb 8 18:00:58 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Thu, 8 Feb 2018 10:00:58 -0800 Subject: [openstack-dev] [PTL][SIG][PTG]Team Photos In-Reply-To: References: Message-ID: Hi Kendall, Can you put Octavia down for 2:10 on Thursday after neutron? Thanks, Michael On Wed, Feb 7, 2018 at 9:15 PM, Kendall Nelson wrote: > Hello PTLs and SIG Chairs! > > So here's the deal, we have 50 spots that are first come, first served. We > have slots available before and after lunch both Tuesday and Thursday. > > The google sheet here[1] should be set up so you have access to edit, but if > you can't for some reason just reply directly to me and I can add your team > to the list (I need team/sig name and contact email). > > I will be locking the google sheet on Monday February 26th so I need to know > if your team is interested by then. > > See you soon! > > - Kendall Nelson (diablo_rojo) > > [1] > https://docs.google.com/spreadsheets/d/1J2MRdVQzSyakz9HgTHfwYPe49PaoTypX66eNURsopQY/edit?usp=sharing > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From zbitter at redhat.com Thu Feb 8 18:06:18 2018 From: zbitter at redhat.com (Zane Bitter) Date: Thu, 8 Feb 2018 13:06:18 -0500 Subject: [openstack-dev] [all][Kingbird]Multi-Region Orchestrator In-Reply-To: <5A7B2732.8040101@windriver.com> References: <7c7191c1-6bb4-66e9-fbdf-699a9841a2bb@gmail.com> <29be24fb-80c4-621b-698e-e2b45f5fcb74@gmail.com> <5A7B2732.8040101@windriver.com> Message-ID: On 07/02/18 11:20, Chris Friesen wrote: > One use-case I've seen for this sort of thing is someone that has multiple geographically-separate clouds, and maybe they want to run the same heat stack in all of them. > > Something like "create a keypair in each of the clouds with the same > public key and same name" could be done by the end user with some > coding, but it's convenient to have a tool to do it for you. You can do this inside the Heat stack: https://docs.openstack.org/heat/latest/template_guide/openstack.html#OS::Nova::KeyPair - ZB From prometheanfire at gentoo.org Thu Feb 8 18:13:14 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Thu, 8 Feb 2018 12:13:14 -0600 Subject: [openstack-dev] [PTL][SIG][PTG]Team Photos In-Reply-To: References: Message-ID: <20180208181314.tzjv4ulxca46vhw6@gentoo.org> On 18-02-08 10:00:58, Michael Johnson wrote: > Hi Kendall, > > Can you put Octavia down for 2:10 on Thursday after neutron? > Thanks, > Michael > > > On Wed, Feb 7, 2018 at 9:15 PM, Kendall Nelson wrote: > > Hello PTLs and SIG Chairs! > > > > So here's the deal, we have 50 spots that are first come, first served. We > > have slots available before and after lunch both Tuesday and Thursday. > > > > The google sheet here[1] should be set up so you have access to edit, but if > > you can't for some reason just reply directly to me and I can add your team > > to the list (I need team/sig name and contact email). > > > > I will be locking the google sheet on Monday February 26th so I need to know > > if your team is interested by then. 
> > > > See you soon! > > > > - Kendall Nelson (diablo_rojo) > > > > [1] > > https://docs.google.com/spreadsheets/d/1J2MRdVQzSyakz9HgTHfwYPe49PaoTypX66eNURsopQY/edit?usp=sharing > > And Requirements after that (at 2:20 Thursday) -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From kennelson11 at gmail.com Thu Feb 8 18:21:05 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 08 Feb 2018 18:21:05 +0000 Subject: [openstack-dev] [PTL][SIG][PTG]Team Photos In-Reply-To: References: Message-ID: This link might work better for everyone: https://docs.google.com/spreadsheets/d/1J2MRdVQzSyakz9HgTHfwYPe49PaoTypX66eNURsopQY/edit?usp=sharing -Kendall (diablo_rojo) On Wed, Feb 7, 2018 at 9:15 PM Kendall Nelson wrote: > Hello PTLs and SIG Chairs! > > So here's the deal, we have 50 spots that are first come, first served. We > have slots available before and after lunch both Tuesday and Thursday. > > The google sheet here[1] should be set up so you have access to edit, but > if you can't for some reason just reply directly to me and I can add your > team to the list (I need team/sig name and contact email). > > I will be locking the google sheet on *Monday February 26th so I need to > know if your team is interested by then. * > > See you soon! > > - Kendall Nelson (diablo_rojo) > > [1] > https://docs.google.com/spreadsheets/d/1J2MRdVQzSyakz9HgTHfwYPe49PaoTypX66eNURsopQY/edit?usp=sharing > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Thu Feb 8 18:22:47 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 08 Feb 2018 18:22:47 +0000 Subject: [openstack-dev] [PTL][SIG][PTG]Team Photos In-Reply-To: References: Message-ID: Done! On Thu, Feb 8, 2018 at 9:39 AM Andrea Frittoli wrote: > Hello Kendall, > > QA Team Tue at 11:00 please :) > > Andrea > > On Thu, Feb 8, 2018 at 5:37 PM Kendall Nelson > wrote: > >> Done! >> >> On Thu, 8 Feb 2018, 8:56 am Miguel Lavalle, wrote: >> >>> Hi Kendall, >>> >>> Can you add Neutron on Thursday at 2pm. If that is not available, then >>> anytime Wednesday or Thursday. I am the contact: miguel at mlavalle.com >>> >>> Thanks >>> >>> On Wed, Feb 7, 2018 at 11:15 PM, Kendall Nelson >>> wrote: >>> >>>> Hello PTLs and SIG Chairs! >>>> >>>> So here's the deal, we have 50 spots that are first come, first >>>> served. We have slots available before and after lunch both Tuesday and >>>> Thursday. >>>> >>>> The google sheet here[1] should be set up so you have access to edit, >>>> but if you can't for some reason just reply directly to me and I can add >>>> your team to the list (I need team/sig name and contact email). >>>> >>>> I will be locking the google sheet on *Monday February 26th so I need >>>> to know if your team is interested by then. * >>>> >>>> See you soon! 
>>>> >>>> - Kendall Nelson (diablo_rojo) >>>> >>>> [1] >>>> https://docs.google.com/spreadsheets/d/1J2MRdVQzSyakz9HgTHfwYPe49PaoTypX66eNURsopQY/edit?usp=sharing >>>> >>>> >>>> >>>> >>>> >>>> >>>> __________________________________________________________________________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Thu Feb 8 18:23:06 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 08 Feb 2018 18:23:06 +0000 Subject: [openstack-dev] [PTL][SIG][PTG]Team Photos In-Reply-To: <20180208181314.tzjv4ulxca46vhw6@gentoo.org> References: <20180208181314.tzjv4ulxca46vhw6@gentoo.org> Message-ID: Done! On Thu, Feb 8, 2018 at 10:13 AM Matthew Thode wrote: > On 18-02-08 10:00:58, Michael Johnson wrote: > > Hi Kendall, > > > > Can you put Octavia down for 2:10 on Thursday after neutron? > > Thanks, > > Michael > > > > > > On Wed, Feb 7, 2018 at 9:15 PM, Kendall Nelson > wrote: > > > Hello PTLs and SIG Chairs! > > > > > > So here's the deal, we have 50 spots that are first come, first > served. We > > > have slots available before and after lunch both Tuesday and Thursday. > > > > > > The google sheet here[1] should be set up so you have access to edit, > but if > > > you can't for some reason just reply directly to me and I can add your > team > > > to the list (I need team/sig name and contact email). > > > > > > I will be locking the google sheet on Monday February 26th so I need > to know > > > if your team is interested by then. > > > > > > See you soon! > > > > > > - Kendall Nelson (diablo_rojo) > > > > > > [1] > > > > https://docs.google.com/spreadsheets/d/1J2MRdVQzSyakz9HgTHfwYPe49PaoTypX66eNURsopQY/edit?usp=sharing > > > > > And Requirements after that (at 2:20 Thursday) > > -- > Matthew Thode (prometheanfire) > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From billy.olsen at gmail.com Thu Feb 8 18:23:34 2018 From: billy.olsen at gmail.com (Billy Olsen) Date: Thu, 8 Feb 2018 11:23:34 -0700 Subject: [openstack-dev] [charms] Propose Dmitrii Shcherbakov for OpenStack Charmers team. 
In-Reply-To: References: Message-ID: Dmitrii easily gets a +1 from me! On 02/08/2018 09:42 AM, Alex Kavanagh wrote: > Hi > > I'd like to propose Dmitrii Shcherbakov to join the launchpad > "OpenStack Charmers" team.  He's done some tremendous work on existing > the charms, has developed some new ones, and has really developed his > understanding of configuring and implementing OpenStack.  I think he'd > make a great addition to the team. > > Thanks > Alex. > > > -- > Alex Kavanagh - Software Engineer > Cloud Dev Ops - Solutions & Product Engineering - Canonical Ltd > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From andrea.frittoli at gmail.com Thu Feb 8 18:36:07 2018 From: andrea.frittoli at gmail.com (Andrea Frittoli) Date: Thu, 08 Feb 2018 18:36:07 +0000 Subject: [openstack-dev] [Openstack-sigs] [PTL][SIG][PTG]Team Photos In-Reply-To: References: Message-ID: On Thu, Feb 8, 2018 at 6:21 PM Kendall Nelson wrote: > This link might work better for everyone: > > https://docs.google.com/spreadsheets/d/1J2MRdVQzSyakz9HgTHfwYPe49PaoTypX66eNURsopQY/edit?usp=sharing > +1 thanks this is editable > > -Kendall (diablo_rojo) > > > On Wed, Feb 7, 2018 at 9:15 PM Kendall Nelson > wrote: > >> Hello PTLs and SIG Chairs! >> >> So here's the deal, we have 50 spots that are first come, first >> served. We have slots available before and after lunch both Tuesday and >> Thursday. >> >> The google sheet here[1] should be set up so you have access to edit, but >> if you can't for some reason just reply directly to me and I can add your >> team to the list (I need team/sig name and contact email). >> >> I will be locking the google sheet on *Monday February 26th so I need to >> know if your team is interested by then. * >> >> See you soon! >> >> - Kendall Nelson (diablo_rojo) >> >> [1] >> https://docs.google.com/spreadsheets/d/1J2MRdVQzSyakz9HgTHfwYPe49PaoTypX66eNURsopQY/edit?usp=sharing >> >> >> >> >> _______________________________________________ > openstack-sigs mailing list > openstack-sigs at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Thu Feb 8 19:00:52 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 8 Feb 2018 13:00:52 -0600 Subject: [openstack-dev] [magnum][release] release-post job for openstack/releases failed Message-ID: <20180208190051.GA11528@sm-xps> The release job for magnum failed, but luckily it was after tagging and branching the release. It was not able to get to the point of uploading a tarball to http://tarballs.openstack.org/magnum/ though. The problem the job encountered is that magnum is now configured to publish to Pypi. The tricky part ends up being that the "magnum" package on Pypi is not this magnum project. It appears to be an older abandoned project by someone, not related to OpenStack. There is an openstack-magnum registered. But since the setup.cfg file in openstack/magnum has "name = magnum", it attempts to publish to the one that is not ours. 
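Concretely, the fix amounts to a one-line edit to the [metadata] section of magnum's setup.cfg, roughly along these lines (a sketch, not the exact diff; see the review linked below):

    [metadata]
    # publish to the OpenStack-owned PyPI entry instead of the unrelated "magnum" package
    name = openstack-magnum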
I have put up a patch to openstack/magnum to change the name to openstack-magnum here: https://review.openstack.org/#/c/542371/ That, or something like it, will need to merge and be backported to stable/queens before we can get this project published. If there are any questions, please feel free to drop in to the #openstack-release channel. Thanks, Sean ----- Forwarded message from zuul at openstack.org ----- Date: Thu, 08 Feb 2018 17:09:44 +0000 From: zuul at openstack.org To: release-job-failures at lists.openstack.org Subject: [Release-job-failures] release-post job for openstack/releases failed Reply-To: openstack-dev at lists.openstack.org Build failed. - tag-releases http://logs.openstack.org/11/1160e02315eaef3a8380af3d6dd9f707eccc214e/release-post/tag-releases/ff8305f/ : TIMED_OUT in 32m 28s - publish-static publish-static : SKIPPED _______________________________________________ Release-job-failures mailing list Release-job-failures at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures ----- End forwarded message ----- From sean.mcginnis at gmx.com Thu Feb 8 19:03:17 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 8 Feb 2018 13:03:17 -0600 Subject: [openstack-dev] [magnum] Release of openstack/magnum failed Message-ID: <20180208190316.GB11528@sm-xps> Apologies, I forwarded the wrong one just a bit ago. See below for the actual links to the magnum release job failures if you wish to take a look. Sean ----- Forwarded message from zuul at openstack.org ----- Date: Thu, 08 Feb 2018 18:06:54 +0000 From: zuul at openstack.org To: release-job-failures at lists.openstack.org Subject: [Release-job-failures] Release of openstack/magnum failed Reply-To: openstack-dev at lists.openstack.org Build failed. - release-openstack-python http://logs.openstack.org/df/dff1ac0f8248a75c39c5b9449de0b6c83906aff5/release/release-openstack-python/e923153/ : POST_FAILURE in 7m 23s - announce-release announce-release : SKIPPED - propose-update-constraints propose-update-constraints : SKIPPED _______________________________________________ Release-job-failures mailing list Release-job-failures at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures ----- End forwarded message ----- From jimmy at openstack.org Thu Feb 8 19:10:58 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Thu, 08 Feb 2018 13:10:58 -0600 Subject: [openstack-dev] [OpenStack Foundation] CFP Deadline Today - OpenStack Summit Vancouver Message-ID: <5A7CA0C2.4010809@openstack.org> Hi everyone, The Vancouver Summit CFP closes *TODAY at 11:59pm Pacific Time (February 9 at 6:59am UTC).* Get your talks in for: • Container infrastructure • Edge computing • CI/CD • HPC/GPU/AI • Open source community • OpenStack private, public and hybrid cloud View topic ideas for each track HERE and submit your proposals before the deadline! Please note, the sessions that are included in your sponsorship package or purchased as an add-on do not go through the CFP process. If you have any questions, please email summit at openstack.org . Cheers, Jimmy -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From zbitter at redhat.com Thu Feb 8 19:17:17 2018 From: zbitter at redhat.com (Zane Bitter) Date: Thu, 8 Feb 2018 14:17:17 -0500 Subject: [openstack-dev] [all][Kingbird][Heat][Glance]Multi-Region Orchestrator In-Reply-To: References: <2500e357-23a3-2d53-0b5c-591dbd0d4cbb@redhat.com> Message-ID: On 07/02/18 12:24, Goutham Pratapa wrote: > >Yes as you said it can be interpreted as a tool that can > orchestrate multiple-regions. Actually from your additional information I'm now getting the impression that you are, in fact, positioning this as a partial competitor to Heat. > Just to be sure does openstack already has project which can > replicate the resources and orchestrate??? OpenStack has an orchestration service - Heat - and it allows you to do orchestration across multiple regions by creating a nested Stack in an arbitrary region as a resource in a Heat Stack.[1] Heat includes the ability to create Nova keypairs[2] and even, for those users with sufficient privileges, flavors[3] and quotas[4][5][6]. (It used to be able to create Glance images as well, but this was deprecated because it is not feasible using the Glance v2 API.) [1] https://docs.openstack.org/heat/latest/template_guide/openstack.html#OS::Heat::Stack [2] https://docs.openstack.org/heat/latest/template_guide/openstack.html#OS::Nova::KeyPair [3] https://docs.openstack.org/heat/latest/template_guide/openstack.html#OS::Nova::Flavor [4] https://docs.openstack.org/heat/latest/template_guide/openstack.html#OS::Nova::Quota [5] https://docs.openstack.org/heat/latest/template_guide/openstack.html#OS::Cinder::Quota [6] https://docs.openstack.org/heat/latest/template_guide/openstack.html#OS::Neutron::Quota > why because In coming > cycle our idea is that a user just gives a VM-ID or Vm-name and we > sync all the resources with which the vm is actually created > ofcourse we cant have the same network in target-region so we may > need the network-id or port-id from the target region from user so > that kingbird will boot up the requested vm in the target region(s). So it sounds like you are starting from the premise that users will create stuff in an ad-hoc way, then later discover that they need to replicate their ad-hoc deployments to multiple regions, and you're building a tool to do that. Heat, on the other hand, starts from the premise that users will invest a little up-front effort to create a declarative definition of their deployment, which they can then deploy repeatably in multiple (or the same!) regions. Our experience is that people have shown themselves to be quite willing to do this, because repeatable deployments have lots of benefits. Looking at the things you want to synchronise: * Quotas Operators can already use Heat templates to manage these if they so desire. * Flavors Some clouds allow users to create flavors, and those users can use Heat templates to manage them already. Operators can *not* use Heat templates to manage flavors in the same way that that can with quotas, because the OS::Nova::Flavor resource was designed with the above use-case in mind instead. (Specifically, it doesn't allow you to set the name.) Support has been requested for it in the past, however, and given the other kinds of admin-only resources we have in Heat (Quotas, Keystone resources) it would be consistent to modify OS::Nova::Flavor to allow this additional use case. It's possible that operators could benefit from better/other tooling for Flavors and Quotas. 
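To make the remote-stack mechanism concrete, the "same keypair in every region" case from earlier in the thread could be written roughly as follows (region names, file names and the key material are invented for the example):

    # parent.yaml - creates the same keypair in two regions
    heat_template_version: 2016-10-14
    resources:
      keypair_region_one:
        type: OS::Heat::Stack
        properties:
          context:
            region_name: RegionOne
          template: {get_file: keypair.yaml}
          parameters:
            public_key: ssh-rsa AAAA...  # your public key here

      keypair_region_two:
        type: OS::Heat::Stack
        properties:
          context:
            region_name: RegionTwo
          template: {get_file: keypair.yaml}
          parameters:
            public_key: ssh-rsa AAAA...  # your public key here

    # keypair.yaml - the child template instantiated in each region
    heat_template_version: 2016-10-14
    parameters:
      public_key:
        type: string
    resources:
      keypair:
        type: OS::Nova::KeyPair
        properties:
          name: shared-keypair
          public_key: {get_param: public_key}

A single "openstack stack create -t parent.yaml ..." against any one region then fans the keypair out to both, and the same pattern works for the other resource types mentioned above.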
In fact, the reason I've pushed back against some of the admin-facing stuff in Heat is that it often seems to me that Heat is an awkward tool for managing global-singleton or tenant-local-singleton administrator resources. It's definitely fine for multiple tools to co-exist, although a separate OpenStack service with an API seems like it could be overkill to me. * Keypairs This is a non-issue IMHO. * Images I agree with what I think Jay is suggesting here - not that there should be a single global Glance handling multiple regions (locality is important for images), but definitely some sort of multi-region support in Glance (e.g. a built-in way to automatically replicate an image to other regions) would be a better solution than an external service doing it. Glance is always looking for new contributors :) Though I really think the problem here is that there aren't good ways to automate image upload in general with the Glance v2 API; the multiregion part is just a for-loop. Allowing Glance to download an image from a URL (or even if it were limited to Swift objects) instead of having to upload one to it would allow us to resurrect OS::Glance::Image in Heat. * Other user resources These are already handled, in a much more general way, by Heat. Honestly, it seems like a lot of wheels are being reinvented here. I think it would be more productive to start with a list of use cases and see whether the gaps can be covered by changes to existing services that they would consider in-scope. cheers, Zane. From james.page at canonical.com Thu Feb 8 19:38:34 2018 From: james.page at canonical.com (James Page) Date: Thu, 08 Feb 2018 19:38:34 +0000 Subject: [openstack-dev] [charms] Propose Dmitrii Shcherbakov for OpenStack Charmers team. In-Reply-To: References: Message-ID: +1 from me On Thu, 8 Feb 2018 at 18:23 Billy Olsen wrote: > Dmitrii easily gets a +1 from me! > > On 02/08/2018 09:42 AM, Alex Kavanagh wrote: > > Hi > > > > I'd like to propose Dmitrii Shcherbakov to join the launchpad > > "OpenStack Charmers" team. He's done some tremendous work on existing > > the charms, has developed some new ones, and has really developed his > > understanding of configuring and implementing OpenStack. I think he'd > > make a great addition to the team. > > > > Thanks > > Alex. > > > > > > -- > > Alex Kavanagh - Software Engineer > > Cloud Dev Ops - Solutions & Product Engineering - Canonical Ltd > > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From no-reply at openstack.org Thu Feb 8 20:22:10 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 08 Feb 2018 20:22:10 -0000 Subject: [openstack-dev] [designate] designate 6.0.0.0rc1 (queens) Message-ID: Hello everyone, A new release candidate for designate for the end of the Queens cycle is available! 
You can find the source code tarball at: https://tarballs.openstack.org/designate/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/designate/log/?h=stable/queens Release notes for designate can be found at: http://docs.openstack.org/releasenotes/designate/ If you find an issue that could be considered release-critical, please file it at: https://bugs.launchpad.net/designate and tag it *queens-rc-potential* to bring it to the designate release crew's attention. From no-reply at openstack.org Thu Feb 8 20:36:11 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 08 Feb 2018 20:36:11 -0000 Subject: [openstack-dev] [trove] trove 9.0.0.0rc1 (queens) Message-ID: Hello everyone, A new release candidate for trove for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/trove/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/trove/log/?h=stable/queens Release notes for trove can be found at: http://docs.openstack.org/releasenotes/trove/ From no-reply at openstack.org Thu Feb 8 20:36:14 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 08 Feb 2018 20:36:14 -0000 Subject: [openstack-dev] [trove] trove-dashboard 10.0.0.0rc1 (queens) Message-ID: Hello everyone, A new release candidate for trove-dashboard for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/trove-dashboard/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/trove-dashboard/log/?h=stable/queens Release notes for trove-dashboard can be found at: http://docs.openstack.org/releasenotes/trove-dashboard/ From no-reply at openstack.org Thu Feb 8 20:36:29 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 08 Feb 2018 20:36:29 -0000 Subject: [openstack-dev] [designate] designate-dashboard 6.0.0.0rc1 (queens) Message-ID: Hello everyone, A new release candidate for designate-dashboard for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/designate-dashboard/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! 
Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/designate-dashboard/log/?h=stable/queens Release notes for designate-dashboard can be found at: http://docs.openstack.org/releasenotes/designate-dashboard/ From no-reply at openstack.org Thu Feb 8 20:38:36 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 08 Feb 2018 20:38:36 -0000 Subject: [openstack-dev] [searchlight] searchlight 4.0.0.0rc1 (queens) Message-ID: Hello everyone, A new release candidate for searchlight for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/searchlight/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/searchlight/log/?h=stable/queens Release notes for searchlight can be found at: http://docs.openstack.org/releasenotes/searchlight/ From no-reply at openstack.org Thu Feb 8 20:38:55 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 08 Feb 2018 20:38:55 -0000 Subject: [openstack-dev] [congress] congress 7.0.0.0rc1 (queens) Message-ID: Hello everyone, A new release candidate for congress for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/congress/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/congress/log/?h=stable/queens Release notes for congress can be found at: http://docs.openstack.org/releasenotes/congress/ From no-reply at openstack.org Thu Feb 8 21:12:53 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 08 Feb 2018 21:12:53 -0000 Subject: [openstack-dev] [congress] congress-dashboard 2.0.0.0rc1 (queens) Message-ID: Hello everyone, A new release candidate for congress-dashboard for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/congress-dashboard/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release.
You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/congress-dashboard/log/?h=stable/queens Release notes for congress-dashboard can be found at: http://docs.openstack.org/releasenotes/congress-dashboard/ If you find an issue that could be considered release-critical, please file it at: https://bugs.launchpad.net/congress and tag it *queens-rc-potential* to bring it to the congress-dashboard release crew's attention. From prometheanfire at gentoo.org Thu Feb 8 21:20:54 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Thu, 8 Feb 2018 15:20:54 -0600 Subject: [openstack-dev] [requirements] more help needed Message-ID: <20180208212054.m4dtmlqwwifrh5ff@gentoo.org> The following do not have a stable/queens branch and could cause requirements to remain frozen until they do. If I get no response by tomorrow afternoon my time (about 24 hours from the time this email was SENT) I may still move to unfreeze requirements. [tricircle] tricircle [heat] heat-agents There are other projects needing a stable/queens branch as well, but those are the main two. Below you'll find all the non-cycle-trailing projects that do requirements syncs without stable/queens branches. Projects without team or release model could not be found in openstack/releases for queens openstack/almanach openstack/compute-hyperv openstack/ekko openstack/gce-api openstack/glare openstack/ironic-staging-drivers openstack/kosmos openstack/mixmatch openstack/mogan openstack/nemesis openstack/networking-dpm openstack/networking-hpe openstack/networking-l2gw openstack/nova-dpm openstack/nova-lxd openstack/os-xenapi openstack/python-glareclient openstack/python-kingbirdclient openstack/python-moganclient openstack/python-oneviewclient openstack/python-valenceclient openstack/swauth openstack/tap-as-a-service openstack/trio2o openstack/valence openstack/vmware-nsx openstack/vmware-nsxlib openstackclient OpenStackClient os-service-types OpenStackSDK anchor Security ec2-api ec2-api magnum-ui magnum manila-image-elements manila masakari masakari masakari-monitors masakari python-masakariclient masakari tacker tacker tacker-horizon tacker Repos with type: horizon-plugin blazar-dashboard blazar heat-dashboard heat manila-ui manila senlin-dashboard senlin solum-dashboard solum Repos with type: other heat-agents heat networking-hyperv winstackers Repos with type: service tricircle tricircle -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From prometheanfire at gentoo.org Thu Feb 8 21:22:03 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Thu, 8 Feb 2018 15:22:03 -0600 Subject: [openstack-dev] [requirements][tricircle][heat] more help needed In-Reply-To: <20180208212054.m4dtmlqwwifrh5ff@gentoo.org> References: <20180208212054.m4dtmlqwwifrh5ff@gentoo.org> Message-ID: <20180208212203.swhykcty2ecdts2m@gentoo.org> On 18-02-08 15:20:54, Matthew Thode wrote: > The following do not have a stable/queens branch and could cause > requirements to remain frozen until they do. If I get no response by > tomorrow afternoon my time (about 24 hours from the time this email was > SENT) I may still move to unfreeze requirements.
> > [tricircle] > tricircle > > [heat] > heat-agents > > There are other projects needing a stable/queens branch as well, but > those are the main two. Below you'll find all the non-cycle-trailing > projects that do requirements syncs without stable/queens branches. > > > Projects without team or release model could not be found in openstack/releases for queens > openstack/almanach > openstack/compute-hyperv > openstack/ekko > openstack/gce-api > openstack/glare > openstack/ironic-staging-drivers > openstack/kosmos > openstack/mixmatch > openstack/mogan > openstack/nemesis > openstack/networking-dpm > openstack/networking-hpe > openstack/networking-l2gw > openstack/nova-dpm > openstack/nova-lxd > openstack/os-xenapi > openstack/python-glareclient > openstack/python-kingbirdclient > openstack/python-moganclient > openstack/python-oneviewclient > openstack/python-valenceclient > openstack/swauth > openstack/tap-as-a-service > openstack/trio2o > openstack/valence > openstack/vmware-nsx > openstack/vmware-nsxlib > openstackclient OpenStackClient > os-service-types OpenStackSDK > anchor Security > ec2-api ec2-api > magnum-ui magnum > manila-image-elements manila > masakari masakari > masakari-monitors masakari > python-masakariclient masakari > tacker tacker > tacker-horizon tacker > > Repos with type: horizon-plugin > blazar-dashboard blazar > heat-dashboard heat > manila-ui manila > senlin-dashboard senlin > solum-dashboard solum > > Repos with type: other > heat-agents heat > networking-hyperv winstackers > > Repos with type: service > tricircle tricircle > adding tags... -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From no-reply at openstack.org Thu Feb 8 21:27:49 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 08 Feb 2018 21:27:49 -0000 Subject: [openstack-dev] [glance] glance 16.0.0.0rc1 (queens) Message-ID: Hello everyone, A new release candidate for glance for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/glance/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/glance/log/?h=stable/queens Release notes for glance can be found at: http://docs.openstack.org/releasenotes/glance/ From prometheanfire at gentoo.org Thu Feb 8 21:29:53 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Thu, 8 Feb 2018 15:29:53 -0600 Subject: [openstack-dev] [requirements][kolla][openstack-ansible][puppet][tripleo] requirements unfreeze and you, how you should handle it Message-ID: <20180208212953.6y4ucfov7n4vlce6@gentoo.org> As the title states, cycle trailing projects will need to change their requirements update behavior until they create stable/queens branches. When requirements unfreezes we will be doing rocky work, meaning that requirements updates to your projects (our master to your master) will be for rocky. I request that all the projects tagged in the email's subject get a +1 from a requirements core before merging until they branch stable/queens. Once they branch stable/queens the projects are free to proceed as normal.
If the projects tagged in the subject can ack me (email or irc) I'd appreciate it, would give us some peace of mind to unfreeze tomorrow. -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From no-reply at openstack.org Thu Feb 8 21:32:47 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 08 Feb 2018 21:32:47 -0000 Subject: [openstack-dev] [freezer] freezer 6.0.0.0rc1 (queens) Message-ID: Hello everyone, A new release candidate for freezer for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/freezer/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/freezer/log/?h=stable/queens Release notes for freezer can be found at: http://docs.openstack.org/releasenotes/freezer/ From no-reply at openstack.org Thu Feb 8 21:33:20 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 08 Feb 2018 21:33:20 -0000 Subject: [openstack-dev] [freezer] freezer-web-ui 6.0.0.0rc1 (queens) Message-ID: Hello everyone, A new release candidate for freezer-web-ui for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/freezer-web-ui/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/freezer-web-ui/log/?h=stable/queens Release notes for freezer-web-ui can be found at: http://docs.openstack.org/releasenotes/freezer-web-ui/ From no-reply at openstack.org Thu Feb 8 21:39:53 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 08 Feb 2018 21:39:53 -0000 Subject: [openstack-dev] [freezer] freezer-api 6.0.0.0rc1 (queens) Message-ID: Hello everyone, A new release candidate for freezer-api for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/freezer-api/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/freezer-api/log/?h=stable/queens Release notes for freezer-api can be found at: http://docs.openstack.org/releasenotes/freezer-api/ From no-reply at openstack.org Thu Feb 8 21:40:55 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 08 Feb 2018 21:40:55 -0000 Subject: [openstack-dev] [freezer] freezer-dr 6.0.0.0rc1 (queens) Message-ID: Hello everyone, A new release candidate for freezer-dr for the end of the Queens cycle is available! 
You can find the source code tarball at: https://tarballs.openstack.org/freezer-dr/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/freezer-dr/log/?h=stable/queens Release notes for freezer-dr can be found at: http://docs.openstack.org/releasenotes/freezer-dr/ From gcerami at redhat.com Thu Feb 8 22:43:56 2018 From: gcerami at redhat.com (Gabriele Cerami) Date: Thu, 8 Feb 2018 22:43:56 +0000 Subject: [openstack-dev] [reno] an alternative approach to known issues Message-ID: <20180208224356.os3z5qqqcvo53xtp@localhost> Hi, it sometimes happens, while reviewing a patch, that you find an issue that is not quite a bug, because it doesn't limit functionality, but may represent a problem in some corner case, or with some possible future modification of some component involved in the patch; it may best be described as a weakness in the code, which shows up only under certain circumstances. The author, because of some time or complexity constraint, is creating a technical debt, or making a micro design choice. How do we keep track of the issue? How, after 6 months, when there's time and bandwidth to look at the problem again, can this note be found and the issue dealt with in the way it deserves? How do we then help prioritize the list of issues left behind over the duration of a release? Nobody is going to read all the comments on all the merged patches of the past months to find all the objections. Also, technical debts cannot be treated like bugs, because they have a different life span. A bug is opened and closed for good after a while. A technical debt may be carried for a long time, and it would be perfectly natural to mark it as something to just live with, and pay the interest for, because the time required to solve it is not worth it. And despite that, it's important to keep track of them, because an eventual reevaluation of the interest cost or a change in the surroundings (a new requirement that breaks an assumption) may lead to a different decision after some time. The way technical debts are treated right now officially is by adding a TODO note inside the code, or maybe adding an "issue" field in release notes. I would like to expand this TODO note, and the known issue field, and make them something more structured. I thought about reno, to create a technical debt register/micro design document. A developer would generate a UUID, put a comment # TD: <UUID> in the code, and then add the description in reno: a simple yaml associative array with three or four keys (UUID, description, consequences, options), which may describe either the problem or the micro design choice and the assumption without which the code may show these weaknesses. The description would stay with the code, submitted with the same patch that introduced it. Then, when it's time, a report on all these descriptions could be created to evaluate, prioritize and eventually close the gap that was created, or just mark it as "prefer to just deal with the consequences". One may later run into a problem a number of times, find the piece of code responsible, see that the problem is known, and immediately raise its impact to request a reevaluation.
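To make this concrete, here is a purely illustrative sketch of how the marker and the matching note could look; the UUID, the helper and every field value below are invented for the example, and none of this exists in reno today:

    import uuid

    # The developer generates the identifier once, for example with:
    print(uuid.uuid4())   # e.g. 2b1d3c9e-0f7a-4c11-9a2b-9c5df0a6e1c4

    # ...and leaves the marker next to the imperfect code:
    # TD: 2b1d3c9e-0f7a-4c11-9a2b-9c5df0a6e1c4
    def collect_results(results):
        # plain list kept for simplicity, see the matching tech debt note
        results.sort()
        return results

    # The reno note then carries the three or four keys described above
    # (written here as a Python dict for illustration; the real note
    # would be a small yaml file):
    note = {
        "UUID": "2b1d3c9e-0f7a-4c11-9a2b-9c5df0a6e1c4",
        "description": "results are kept in a plain, fully sorted list",
        "consequences": "sorting gets slow once the list grows very large",
        "options": "switch to a more efficient structure, or cap the size",
    }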
Or we may realize that the code that creates a certain amount of weaknesses is going to be deleted, and we can close all the items related to it. The creation and handling of such items could add too much of a burden to the developer, for these reasons, I would prefer to automate some part of the creation, for example the UUID generation, date expansion, status change on the item. I used this, to try out how this automation could work https://review.openstack.org/538233 which could add basic logic to the templates, to automate some of the tasks. This idea certainly requires refinement (for example what happens when the weakness is discovered at a later time), but I would like to understand if it's possible to use reno for this approach. Any feedback would be highly appreciated. Thanks From openstack at nemebean.com Thu Feb 8 23:07:49 2018 From: openstack at nemebean.com (Ben Nemec) Date: Thu, 8 Feb 2018 17:07:49 -0600 Subject: [openstack-dev] [reno][tripleo] an alternative approach to known issues In-Reply-To: <20180208224356.os3z5qqqcvo53xtp@localhost> References: <20180208224356.os3z5qqqcvo53xtp@localhost> Message-ID: <7462d4c0-5c42-7586-4a0c-a18cfd63669b@nemebean.com> So TripleO has a tech debt policy: https://specs.openstack.org/openstack/tripleo-specs/specs/policy/tech-debt-tracking.html (and I'm tagging tripleo on this thread for visibility). It essentially comes down to: open a bug, tag it "tech-debt", and reference it in the code near the tech debt. I kind of like that approach because it makes use of the existing integration between Gerrit and Launchpad, and we don't have to invent a new system for triaging tech debt. It just gets treated as the appropriate level bug. I guess my question then would be whether there is sufficient advantage to inventing a new system in Reno when we already have systems in place that seem suited to this. I have a few specific thoughts below too. On 02/08/2018 04:43 PM, Gabriele Cerami wrote: > Hi, > > sometimes it happens, while reviewing a patch, to find an issue that > is not quite a bug, because it doesn't limit functionality, but > may represent a problem in some corner case, or with some possible > future modification in some component involved in the patch; it may > best be described as a weakness in the code, which may happen only under > certain circumstances. > The author, for some time or complexity constraint is creating a > technical debt, or making a micro design choice. > > How to keep track of the issue ? How, after 6 month when there's time > and bandwidth to look at the problem again, can this note be found and > issue dealt in the way it deserves ? > How to help prioritize then the list of issues left behind during the > duration of a release ? > Nobody is going to read all the comments on all the merged patches in > the past months, to find all the objections. > Also technical debts cannot be treated like bugs, because they have a > different life span. A bug is opened and closed for good after a while. I'm not sure I agree. Bugs stay open until they are fixed/won't fixed. Tech debt stays open until it is fixed/won't fixed. We've had bugs open for years for things that are tricky to fix. Arguably those are tech debt too, but in any case I'm not aware of any problems with using the bug tracker to manage them. > A technical debt may be carried for long time, and it would be perfectly > natural to mark it as something to just live with, and pay the interest > for, because the time required to solve it it's not worth it. 
And > despite that, it's important to keep track of them because an eventual > reevaluation of the interests cost or a change in the surroundings (a > new requirement that breaks an assumption) may lead to a different > decision after some time. > > The way technical debts are treated right now officially is by adding a > TODO note inside the code, or maybe adding a "issue" field in release > notes. > I would like to expand this TODO note, and the known issue field, > make it become something more structured. > I thought about reno, to create a technical debt register/micro design > document. > A developer would generate a UUID, put on the code a comment > > # TD: > > and then add the description in reno. A simple yaml associative array > with three or four keys: UUID, description, consequences, options, which > may describe either the problem or the micro design choice and > assumption without which the code may show these weaknesses. > The description would stay with the code, submitted with the same > patch with which it was introduced. Then when it's time, a report on all > these description could be created to evaluate, prioritize and > eventually close the gap that was created, or just mark that as "prefer > to just deal with the consequences" > > One may later incur in a problem a number of times, find the piece of > code responsible, and see that the problem is know, and immediately > raise its impact to request a reevaluation. > Or we may realize that the code that creates a certain amount of > weaknesses is going to be deleted, and we can close all the items > related to it. > > The creation and handling of such items could add too much of a burden > to the developer, for these reasons, I would prefer to automate some > part of the creation, for example the UUID generation, date expansion, > status change on the item. > > I used this, to try out how this automation could work > > https://review.openstack.org/538233 > > which could add basic logic to the templates, to automate some of the > tasks. > > This idea certainly requires refinement (for example what happens when > the weakness is discovered at a later time), but I would like to > understand if it's possible to use reno for this approach. Any feedback > would be highly appreciated. I'm kind of split on the idea of templates for Reno. On the one hand I could see it being useful for complex things, but on the other I wonder if something complex enough to require a template actually belongs in release notes or if it should go in formal documentation. From mitchell at arista.com Thu Feb 8 17:23:11 2018 From: mitchell at arista.com (Mitchell Jameson) Date: Thu, 8 Feb 2018 09:23:11 -0800 Subject: [openstack-dev] [ironic][neutron] bare metal on vxlan network In-Reply-To: References: Message-ID: Hi Moshe, I'm not aware of any mechanism drivers that actually create the VTEP (that will be a manual step,) but there are drivers that support Hierarchical Port Binding [1]. The networking-arista mechanism driver [2] is one such driver, but there are others. Such drivers will configure a VLAN to VNI mapping on switches based on the segmentation IDs of the neutron network segments bound at each port binding level. In such a deployment, you'd create a VXLAN network in neutron. You could then launch a baremetal instance on that network. 
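To picture what happens at bind time, here is a toy sketch of the switch-side bookkeeping, under my own simplifying assumptions; it is not code from networking-arista or any other driver:

    # Toy model of hierarchical port binding for a baremetal port: the
    # top-level segment is the tenant's VXLAN network (a VNI), and the
    # driver completes the binding by picking a free VLAN for the
    # ToR-facing segment and recording the VLAN<->VNI mapping to be
    # programmed on that switch.
    class ToyTorSwitch(object):
        def __init__(self, vlan_pool):
            self.free_vlans = list(vlan_pool)
            self.vlan_to_vni = {}

        def bind_baremetal_port(self, vni):
            vlan = self.free_vlans.pop(0)   # dynamic VLAN allocation
            self.vlan_to_vni[vlan] = vni    # VLAN to VNI mapping on the ToR
            return vlan

    tor = ToyTorSwitch(vlan_pool=range(100, 200))
    print(tor.bind_baremetal_port(vni=5001))   # -> 100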
The mechanism driver will then be responsible for dynamically allocating a VLAN for the baremetal<->TOR switch segment and mapping that VLAN to the VXLAN network's VNI on the TOR switch such that the baremetal instance is connected to the VXLAN fabric segment. [1] https://specs.openstack.org/openstack/neutron-specs/ specs/kilo/ml2-hierarchical-port-binding.html [2] https://github.com/openstack/networking-arista On Wed, Feb 7, 2018 at 10:06 PM, Moshe Levi wrote: > Hi all, > > > > Ironic supports mutli tenancy for quite few releases and according to the > spec [1] it can work with vlan/vxlan networks. > > I see lot of mechanism driver that support vlan network such as [2] and > [3] , but I didn't find any mechanism driver that work on vxlan network. > > Is there a mechanism driver that can configure vtep on a switch exist for > the bare metal? > > > > Help would be appreciated > > > > > > [1] https://specs.openstack.org/openstack/ironic-specs/specs/ > not-implemented/ironic-ml2-integration.html > > [2] https://github.com/openstack/networking-arista > > [3] https://github.com/openstack/networking-generic-switch > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aschultz at redhat.com Thu Feb 8 23:23:25 2018 From: aschultz at redhat.com (Alex Schultz) Date: Thu, 8 Feb 2018 16:23:25 -0700 Subject: [openstack-dev] [tripleo] Unbranched repositories and testing In-Reply-To: References: <4165b44a-820b-d025-673a-d3c37d1b6eb1@redhat.com> Message-ID: On Tue, Oct 10, 2017 at 2:24 PM, Emilien Macchi wrote: > On Fri, Oct 6, 2017 at 5:09 AM, Jiří Stránský wrote: >> On 5.10.2017 22:40, Alex Schultz wrote: >>> >>> Hey folks, >>> >>> So I wandered across the policy spec[0] for how we should be handling >>> unbranched repository reviews and I would like to start a broader >>> discussion around this topic. We've seen it several times over the >>> recent history where a change in oooqe or tripleo-ci ends up affecting >>> either a stable branch or an additional set of jobs that were not run >>> on the change. I think it's unrealistic to run every possible job >>> combination on every submission and it's also a giant waste of CI >>> resources. I also don't necessarily agree that we should be using >>> depends-on to prove things are fine for a given patch for the same >>> reasons. That being said, we do need to minimize our risk for patches >>> to these repositories. >>> >>> At the PTG retrospective I mentioned component design structure[1] as >>> something we need to be more aware of. I think this particular topic >>> is one of those types of things where we could benefit from evaluating >>> the structure and policy around these unbranched repositories to see >>> if we can improve it. Is there a particular reason why we continue to >>> try and support deployment of (at least) 3 or 4 different versions >>> within a single repository? Are we adding new features that really >>> shouldn't be consumed by these older versions such that perhaps it >>> makes sense to actually create stable branches? Perhaps there are >>> some other ideas that might work? >> >> >> Other folks probably have a better view of the full context here, but i'll >> chime in with my 2 cents anyway.. 
>> >> I think using stable branches for tripleo-quickstart-extras could be worth >> it. The content there is quite tightly coupled with the expected TripleO >> end-user workflows, which tend to evolve considerably between releases. >> Branching extras might be a good way to "match the reality" in that sense, >> and stop worrying about breaking older workflows. (Just recently it came up >> that the upgrade workflow in O is slightly updated to make it work in P, and >> will change quite a bit for Q. Minor updates also changed between O and P.) >> >> I'd say that tripleo-quickstart is a different story though. It seems fairly >> release-agnostic in its focus. We may want to keep it unbranched (?). That >> probably applies even more for tripleo-ci, where ability to make changes >> which affect how TripleO does CIing in general, across releases, is IMO a >> significant feature. >> >> Maybe branching quickstart-extras might require some code reshuffling >> between what belongs there and what belongs into quickstart itself. > > I agree a lot with Jirka and I think branching oooq-extras would be a > good first start to see how it goes. > If we find it helpful and working correctly, we could go the next > steps and see if there is any other repo that could be branched > (tripleo-ci or oooq) but I guess for now the best candidate is > oooq-extras. > I'm resurrecting this thread as we seemed to have done it again[0] with a change oooq-extras master breaking stable/pike. So I would propose that we start investigating branching oooq-extras. Does anyone see any blocking issues with starting to branch this repository? Thanks, -Alex [0] https://bugs.launchpad.net/tripleo/+bug/1748315 >> (Just my 2 cents, i'm likely not among the most important stakeholders in >> this...) >> >> Jirka >> >> >>> >>> Thanks, >>> -Alex >>> >>> [0] https://review.openstack.org/#/c/478488/ >>> [1] http://people.redhat.com/aschultz/denver-ptg/tripleo-ptg-retro.jpg >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > -- > Emilien Macchi > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From doug at doughellmann.com Thu Feb 8 23:29:18 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 08 Feb 2018 18:29:18 -0500 Subject: [openstack-dev] [magnum][release] release-post job for openstack/releases failed In-Reply-To: <20180208190051.GA11528@sm-xps> References: <20180208190051.GA11528@sm-xps> Message-ID: <1518132431-sup-5783@lrrr.local> Excerpts from Sean McGinnis's message of 2018-02-08 13:00:52 -0600: > The release job for magnum failed, but luckily it was after tagging and > branching the release. It was not able to get to the point of uploading a > tarball to http://tarballs.openstack.org/magnum/ though. 
> > The problem the job encountered is that magnum is now configured to publish to > Pypi. The tricky part ends up being that the "magnum" package on Pypi is not > this magnum project. It appears to be an older abandoned project by someone, > not related to OpenStack. > > There is an openstack-magnum registered. But since the setup.cfg file in > openstack/magnum has "name = magnum", it attempts to publish to the one that is > not ours. > > I have put up a patch to openstack/magnum to change the name to > openstack-magnum here: > > https://review.openstack.org/#/c/542371/ > > That, or something like it, will need to merge and be backported to > stable/queens before we can get this project published. > > If there are any questions, please feel free to drop in to the > #openstack-release channel. > > Thanks, > Sean Another alternative is to change the job configuration for magnum to use release-openstack-server instead of publish-to-pypi, at least for the near term. That would give the magnum team more time to make the changes need to modify the sdist name for the package. Doug From aschultz at redhat.com Thu Feb 8 23:41:22 2018 From: aschultz at redhat.com (Alex Schultz) Date: Thu, 8 Feb 2018 16:41:22 -0700 Subject: [openstack-dev] [requirements][kolla][openstack-ansible][puppet][tripleo] requirements unfreeze and you, how you should handle it In-Reply-To: <20180208212953.6y4ucfov7n4vlce6@gentoo.org> References: <20180208212953.6y4ucfov7n4vlce6@gentoo.org> Message-ID: On Thu, Feb 8, 2018 at 2:29 PM, Matthew Thode wrote: > As the title states, cycle trailing projects will need to change their > requirements update behavior until they create stable/queens branches. > > When requirements unfreezes we will be doing rocky work, meaning that > requirements updates to your projects (our master to your master) will > be for rocky. > > I requests that all the projects tagged in the email's subject get a +1 > from a requirements core before merging until they branch stable/queens. > For clarity: before merging requirements updates. So for TripleO folks please do not merge any requirements updates unless the requirements cores have +1'd or we've branched Queens. > Once they branch stable/queens the projects are free to proceed as > normal. > > If the projects tagged in the subject can ack me (email or irc) I'd > appreciate it, would give us some peace of mind to unfreeze tomorrow. > > -- > Matthew Thode (prometheanfire) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From fungi at yuggoth.org Thu Feb 8 23:42:48 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 8 Feb 2018 23:42:48 +0000 Subject: [openstack-dev] [magnum][release] release-post job for openstack/releases failed In-Reply-To: <1518132431-sup-5783@lrrr.local> References: <20180208190051.GA11528@sm-xps> <1518132431-sup-5783@lrrr.local> Message-ID: <20180208234247.scinq6x6jstloknv@yuggoth.org> On 2018-02-08 18:29:18 -0500 (-0500), Doug Hellmann wrote: [...] > Another alternative is to change the job configuration for magnum to use > release-openstack-server instead of publish-to-pypi, at least for the > near term. That would give the magnum team more time to make the changes > need to modify the sdist name for the package. 
And yet another (longer-term) alternative is: https://www.python.org/dev/peps/pep-0541/#removal-of-an-abandoned-project We're presently trying the same to gain use of the keystone name on PyPI, and magnum's the only other service we have in that same boat as far as I'm aware. In both cases the projects have basically been dead for half a decade (and in the magnum case they never even seem to have uploaded an initial package at all). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From no-reply at openstack.org Thu Feb 8 23:43:56 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 08 Feb 2018 23:43:56 -0000 Subject: [openstack-dev] [manila] manila 6.0.0.0rc1 (queens) Message-ID: Hello everyone, A new release candidate for manila for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/manila/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/manila/log/?h=stable/queens Release notes for manila can be found at: http://docs.openstack.org/releasenotes/manila/ From no-reply at openstack.org Fri Feb 9 00:01:59 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Fri, 09 Feb 2018 00:01:59 -0000 Subject: [openstack-dev] [searchlight] searchlight-ui 4.0.0.0rc2 (queens) Message-ID: Hello everyone, A new release candidate for searchlight-ui for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/searchlight-ui/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/searchlight-ui/log/?h=stable/queens Release notes for searchlight-ui can be found at: http://docs.openstack.org/releasenotes/searchlight-ui/ If you find an issue that could be considered release-critical, please file it at: https://bugs.launchpad.net/searchlight and tag it *queens-rc-potential* to bring it to the searchlight-ui release crew's attention. From gcerami at redhat.com Fri Feb 9 01:42:14 2018 From: gcerami at redhat.com (Gabriele Cerami) Date: Fri, 9 Feb 2018 01:42:14 +0000 Subject: [openstack-dev] [reno][tripleo] an alternative approach to known issues In-Reply-To: <7462d4c0-5c42-7586-4a0c-a18cfd63669b@nemebean.com> References: <20180208224356.os3z5qqqcvo53xtp@localhost> <7462d4c0-5c42-7586-4a0c-a18cfd63669b@nemebean.com> Message-ID: <20180209014214.43ik5scukozb6hig@localhost> On 08 Feb, Ben Nemec wrote: > So TripleO has a tech debt policy: https://specs.openstack.org/openstack/tripleo-specs/specs/policy/tech-debt-tracking.html > (and I'm tagging tripleo on this thread for visibility). I didn't know about this policy. I've been circling around tech debts for more than a month now, and nobody pointed me to it either. Anyway, I find it insufficient. Not specifically the tracking method, but more the guidelines and the example, to understand how to use it correctly. 
Doing some basic research, I see that in tripleo 31 bugs were marked with the tech-debt tag. 15 were closed, but they were also marked as CRITICAL. This does not match my definition of tech-debt. Of the remaining 16, sometimes it's hard to understand which part is the technical debt; some are really new feature requests matching more the feeling "we may have needed to think about this months ago during the design", for some it's just "we don't have a clear idea of what to do", and the rest is "here's a bandaid, we'll think about it later". The policy lacks a definition of what a technical debt is. I understand the issue, as it's really difficult to find a unique definition that fits all we want to include. Whatever we want the definition to be, there are at least three things that I want to see in a tech debt bug (or report), and they all try to focus on the "debt" part of the whole "tech debt" concept. - What's the cost of the repayment - What's the cost of the interests - What's the frequency of the interests For me a technical debt is an imperfect implementation that has consequences. Describable and maybe measurable consequences. "I'm using a list in this case for simplicity, but if we add more items, we may need a more efficient structure, because it will become too slow." The cost of the repayment is the time spent to replace the structure and its methods with something more complex. The cost of the interests is the speed lost when the list increases. The frequency of the interests is "this list will become very big every three hours". Without these three elements it becomes hard to understand if we really want to repay the debt, and how we prioritize the repayments. Since a tech debt is something that I find really related to the code (which piece or line of code is the one that has these measurable consequences), I'd really like for the report to be as close as possible to the code. Also, sometimes it may just become a design choice based on assumptions. "I know the list is not efficient, but it will rarely get big, and we are sure to clear it out almost immediately." We can maybe discuss further the advantages of the existing bug tracking for the handling of these reports. > I'm not sure I agree. Bugs stay open until they are fixed/won't fixed. Tech > debt stays open until it is fixed/won't fixed. We've had bugs open for > years for things that are tricky to fix. Arguably those are tech debt too, > but in any case I'm not aware of any problems with using the bug tracker to > manage them. Remember the "debt" in "technical debt". You're not reporting it correctly if you don't measure the consequences. I don't think the report should really be about the problem or the solution, because then you're really only talking about the full repayment. Of course, without any description of the consequences, the tech debt may be equated to a bug: you really have a problem and you want to discuss only its solution. Another difference is that the importance of a bug rarely changes over time, once correctly triaged. With technical debt instead: - A won't fix doesn't mean that the interests are gone. You closed the bug/tech debt and you are not counting the interests anymore. Convenient and deceiving. There is no status currently that could put the bug on hold, removing it from all short-term consideration but still making it count for its interests, and keeping it possible to consider and reevaluate at any time. - A tech debt really can get more and more costly to repay.
If someone else implement something over you "imperfect" code, the cost of the repayment just doubled, because you have to fix a stack of code now. Marking the code with a # TD may warn someone "be aware that someone is trying to build over a problem" - The frequency of interests may increase also over time, and the importance may raise as we are paying too much interests, and may be better to start considering full repayment. - One of the solution to a technical debt is "conversion": you just render the imperfect solution just less imperfect, that is you don't fully repay it, you repay just a little to lower the interests cost or frequency. It's not a workaround, it's not a fix, you're just reducing its impact. How do you report that in a bug tracking system ? > I'm kind of split on the idea of templates for Reno. On the one hand I > could see it being useful for complex things, but on the other I wonder if > something complex enough to require a template actually belongs in release > notes or if it should go in formal documentation. The template part would be just an aid to the developers, but I certainly see the possibility for this solution to overgrow and start doing something for which reno was not created for. That's why I'm looking for feedback. From rosmaita.fossdev at gmail.com Fri Feb 9 03:07:36 2018 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Thu, 8 Feb 2018 22:07:36 -0500 Subject: [openstack-dev] [glance] priorities for the coming week (9 Feb - 15 Feb) Message-ID: Congratulations to Erno for his election as the Rocky PTL! Erno is taking over PTG planning, so don't forget to add ideas to the planning etherpad: https://etherpad.openstack.org/p/glance-rocky-ptg-planning The first Release Candidate for the Queens edition of Glance was released today and the stable/queens branch was cut. Our focus is still on Queens this week. It would be a good idea to give RC-1 a workout, paying particular attention to interoperable image import and the glance-manage and glance-scrubber tools, which underwent some refactoring and enhancements this cycle. We'll continue to track the development toward RC-2 on this etherpad: https://etherpad.openstack.org/p/glance-queens-rc1-patches Add any bugs you find that are release critical to that etherpad so that the core team can verify that they need to be backported to stable/queens. There will definitely be an RC-2, which will contain whatever we come up with to address https://bugs.launchpad.net/glance/+bug/1747869 But don't let that stop you from testing out RC-1! cheers, brian From tony at bakeyournoodle.com Fri Feb 9 03:35:18 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Fri, 9 Feb 2018 14:35:18 +1100 Subject: [openstack-dev] [Release-job-failures] release-post job for openstack/releases failed In-Reply-To: References: Message-ID: <20180209033518.GQ23143@thor.bakeyournoodle.com> On Fri, Feb 09, 2018 at 03:28:23AM +0000, zuul at openstack.org wrote: > Build failed. > > - tag-releases http://logs.openstack.org/bd/bd802368fe546a891b89f78fec89d3ea9964c155/release-post/tag-releases/ffc68e7/ : TIMED_OUT in 32m 19s > - publish-static publish-static : SKIPPED Can we please re-run these jobs. The tag is on git.o.o but none of the publish/email steps happened. Looks like a network timeout? http://logs.openstack.org/bd/bd802368fe546a891b89f78fec89d3ea9964c155/release-post/tag-releases/ffc68e7/job-output.txt.gz#_2018-02-09_03_27_47_445990 Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From stendulker at gmail.com Fri Feb 9 04:35:53 2018 From: stendulker at gmail.com (Shivanand Tendulker) Date: Fri, 9 Feb 2018 10:05:53 +0530 Subject: [openstack-dev] [ironic] Nominating Hironori Shiina for ironic-core In-Reply-To: References: Message-ID: +1 On Wed, Feb 7, 2018 at 11:53 AM, John Villalovos wrote: > +1 > > On Mon, Feb 5, 2018 at 10:12 AM, Julia Kreger > wrote: > >> I would like to nominate Hironori Shiina to ironic-core. He has been >> working in the ironic community for some time, and has been helping >> over the past several cycles with more complex features. He has >> demonstrated an understanding of Ironic's code base, mechanics, and >> overall community style. His review statistics are also extremely >> solid. I personally have a great deal of trust in his reviews. >> >> I believe he would make a great addition to our team. >> >> Thanks, >> >> -Julia >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From iwienand at redhat.com Fri Feb 9 04:59:58 2018 From: iwienand at redhat.com (Ian Wienand) Date: Fri, 9 Feb 2018 15:59:58 +1100 Subject: [openstack-dev] [Release-job-failures] release-post job for openstack/releases failed In-Reply-To: <20180209033518.GQ23143@thor.bakeyournoodle.com> References: <20180209033518.GQ23143@thor.bakeyournoodle.com> Message-ID: <8937a178-d0ed-89eb-4e5b-d90db04161aa@redhat.com> On 02/09/2018 02:35 PM, Tony Breeds wrote: >> - tag-releases http://logs.openstack.org/bd/bd802368fe546a891b89f78fec89d3ea9964c155/release-post/tag-releases/ffc68e7/ : TIMED_OUT in 32m 19s >> - publish-static publish-static : SKIPPED > > Can we please re-run these jobs. Done with [1] -i [1] http://logs.openstack.org/bd/bd802368fe546a891b89f78fec89d3ea9964c155/release-post/tag-releases/2cdfded/ From liyi8611 at gmail.com Fri Feb 9 05:22:34 2018 From: liyi8611 at gmail.com (Lee Yi) Date: Fri, 9 Feb 2018 13:22:34 +0800 Subject: [openstack-dev] [Senlin] [PTL] PTL nomination for Senlin In-Reply-To: References: <201801311558218822444@zte.com.cn> Message-ID: <3703ED3B-2C3E-4AA3-98F2-79CADF3B47E1@gmail.com> +1 ----------------------------------- Lee Yi / Fiberhome Corp. liyi8611 at gmail.com > On 3 Feb 2018, at 12:38, YUAN RUIJIE wrote: > > +1 > Thanks for taking up responsibility to lead the team!!! > > 2018-01-31 15:58 GMT+08:00 >: > > > Hi all > > I'd like to announce my candidacy for the PTL role of Senlin Project for > > Rocky cycle. > > > > I began to contribute to Senlin project since Mitaka and joined the team as > > a core reviewer in 2016.10. It is my pleasure to work with the great team > > to make this project better and better, and we will keep moving and look > > forward to push Senlin to the next level. > > > > As a clustering service, we already can handle some resource types like nova > > server, heat stack, NFV VDU etc. in past cycles. 
We also have done a lot of > > great works in Queue cycle, for example we finished k8s on Senlin feature's > > demo[1][2][3][4]. And there are still many works need to do in future. > > > > As a PTL in Rocky cycle, I'd like to focus on the tasks as follows: > > > > * Promote k8s on Senlin feature implementation and make it use in NFV > > For example: > > - Add ability to do actions on cluster creation/deletion. > > - Add more network interfaces in drivers. > > - Add kubernetes master profile, use kubeadm to setup one master node. > > - Add kubernetes node profile, auto retrieve kubernetes data from master > > cluster. > > * Improve health policy to support more useful auto-healing scenario > > * Improve LoadBalance policy when use Octavia service driver > > * Improve runtime data processing inside Senlin server > > * A better support for EDGE-Computing unattended operation use cases[5] > > * A stronger team to take the Senlin project to its next level. > > > > Again, it is my pleasure to work with such a great team. > > > > Thanks > > XueFeng Liu > > > > [1]https://review.openstack.org/#/c/515321/ > [2]https://v.qq.com/x/page/i05125sfonh.html > [3]https://v.qq.com/x/page/t0512vo6tw1.html > [4]https://v.qq.com/x/page/y0512ehqiiq.html > [5]https://www.openstack.org/videos/boston-2017/integration-of-enterprise-monitoring-product-senlin-and-mistral-for-auto-healing > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From no-reply at openstack.org Fri Feb 9 07:44:08 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Fri, 09 Feb 2018 07:44:08 -0000 Subject: [openstack-dev] [nova] nova 17.0.0.0rc1 (queens) Message-ID: Hello everyone, A new release candidate for nova for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/nova/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/nova/log/?h=stable/queens Release notes for nova can be found at: http://docs.openstack.org/releasenotes/nova/ From no-reply at openstack.org Fri Feb 9 07:49:14 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Fri, 09 Feb 2018 07:49:14 -0000 Subject: [openstack-dev] [horizon] horizon 13.0.0.0rc1 (queens) Message-ID: Hello everyone, A new release candidate for horizon for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/horizon/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! 
Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/horizon/log/?h=stable/queens Release notes for horizon can be found at: http://docs.openstack.org/releasenotes/horizon/ From no-reply at openstack.org Fri Feb 9 07:51:18 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Fri, 09 Feb 2018 07:51:18 -0000 Subject: [openstack-dev] [murano] murano 5.0.0.0rc1 (queens) Message-ID: Hello everyone, A new release candidate for murano for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/murano/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/murano/log/?h=stable/queens Release notes for murano can be found at: http://docs.openstack.org/releasenotes/murano/ From no-reply at openstack.org Fri Feb 9 08:02:41 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Fri, 09 Feb 2018 08:02:41 -0000 Subject: [openstack-dev] [murano] murano-dashboard 5.0.0.0rc1 (queens) Message-ID: Hello everyone, A new release candidate for murano-dashboard for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/murano-dashboard/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/murano-dashboard/log/?h=stable/queens Release notes for murano-dashboard can be found at: http://docs.openstack.org/releasenotes/murano-dashboard/ From sam47priya at gmail.com Fri Feb 9 08:21:26 2018 From: sam47priya at gmail.com (Sam P) Date: Fri, 9 Feb 2018 17:21:26 +0900 Subject: [openstack-dev] [OpenStackClient][Security][ec2-api][heat][horizon][ironic][kuryr][magnum][manila][masakari][neutron][senlin][shade][solum][swift][tacker][tricircle][vitrage][watcher][winstackers] Help needed for your release In-Reply-To: <7921c7b2-8841-86bb-abe3-09f64d624208@redhat.com> References: <20180207162357.6vbsj5ty76hvhxiw@gentoo.org> <271f73e0-8dc6-d680-711f-6cf4f1911254@redhat.com> <20180207211822.GJ23143@thor.bakeyournoodle.com> <20180207213102.GK23143@thor.bakeyournoodle.com> <7921c7b2-8841-86bb-abe3-09f64d624208@redhat.com> Message-ID: Hi Matthew, Thanks for the info. For Masakari, after discussing with release team, all following 3 project will do independent release for Queens. masakari masakari ​​ masakari-monitors masakari python-masakariclient masakari Still need 3-4 days to create stable/queens for masakari and masakari-monitors and python-masakariclinet can be done late today. I will create stable/queens as soon as we merged the required patches. Requirement update will unfreeze soon, and we will hold the requirement updates till we create stable/queens. 
--- Regards, Sampath On Thu, Feb 8, 2018 at 7:33 PM, Dmitry Tantsur wrote: > On 02/07/2018 10:31 PM, Tony Breeds wrote: > >> On Thu, Feb 08, 2018 at 08:18:37AM +1100, Tony Breeds wrote: >> >> Okay It's safe to ignore then ;P We should probably remove it from >>> projects.txt if it really is empty I'll propose that. >>> >> >> Oh my bad, ironic-python-agent-builder was included as it's included as >> an ironic project[1] NOT because it;s listed in projects.txt. Given >> that it's clearly not for me to remove anything. >> >> Having said that if the project hasn't had any updates at all since it's >> creation in July 2017 perhaps it's no longer needed and could be >> removed? >> > > We do plan to use it, we just never had time to populate it :( > > >> Yours Tony. >> >> [1] http://git.openstack.org/cgit/openstack/governance/tree/refe >> rence/projects.yaml#n1539 >> >> >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrey.mp at gmail.com Fri Feb 9 08:28:56 2018 From: andrey.mp at gmail.com (Andrey Pavlov) Date: Fri, 9 Feb 2018 11:28:56 +0300 Subject: [openstack-dev] [requirements] more help needed Message-ID: Hi Matthew, stable/queens has been created for ec2-api project. project gce-api is mostly dead. Regards, Andrey Pavlov. -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Fri Feb 9 10:57:26 2018 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 9 Feb 2018 11:57:26 +0100 Subject: [openstack-dev] [tc] Technical Committee Status update, February 9th Message-ID: Hi! This is the weekly summary of Technical Committee initiatives. You can find the full list of all open topics (updated twice a week) at: https://wiki.openstack.org/wiki/Technical_Committee_Tracker If you are working on something (or plan to work on something) governance-related that is not reflected on the tracker yet, please feel free to add to it ! == Recently-approved changes == * Goal updates: neutron, vitrage * New repo: ansible-role-k8s-cinder Not much got approved this week, as the focus is on Queens release candidate and PTG preparation. == PTG preparation == The Dublin PTG will start in 17 days ! The schedule is now frozen at: https://www.openstack.org/ptg#tab_schedule That said, we have lots of openly-reservable rooms for last-minute topics or continuing discussions beyond the allocated time. Track leads have set up a number of etherpads to openly brainstorm what to discuss. You can find those (or link to missing ones) here: https://wiki.openstack.org/wiki/PTG/Rocky/Etherpads == Rocky goals == The most consensual set of goals for the Rocky cycle is still: * Remove mox [1] (chandankumar) * Enable toggling DEBUG option at runtime [2] (gcb) Those will be approved on Tuesday unless new objections are posted that result in TC members changing their votes. 
As a reminder, the other proposed goals were: * Storyboard Migration [3] (diablo_rojo) * Ensure pagination links [4] (mordred) * Add Cold upgrades capabilities [5] (masayuki) Additionally, dhellmann proposed using StoryBoard to track goals[6]. This also has majority support and will be approved on Tuesday unless new objections are posted. [1] https://review.openstack.org/532361 [2] https://review.openstack.org/534605 [3] https://review.openstack.org/513875 [4] https://review.openstack.org/532627 [5] https://review.openstack.org/#/c/533544/ [6] https://review.openstack.org/534443 == Voting in progress == Monty proposed a resolution to dedicate the Queens release to the memory of Shawn Pearce. This is still a few votes short: https://review.openstack.org/541313 == Under discussion == A new project team was proposed to regroup people working on PowerVM support in OpenStack. It is similar in many ways to the WinStackers team (working on Hyper-V / Windows support). Please comment on the review at: https://review.openstack.org/#/c/540165/ The discussion started by Graham Hayes to clarify how the testing of interoperability programs should be organized in the age of add-on trademark programs is still going on, now on an active mailing-list thread. Please chime in to inform the TC choice: https://review.openstack.org/521602 http://lists.openstack.org/pipermail/openstack-dev/2018-January/126146.html == TC member actions for the coming week(s) == In preparation for the PTG we need to do the final selection of post-lunch presentations, proposed on: https://etherpad.openstack.org/p/dublin-PTG-postlunch We also need to start collecting topics of discussion for the TC track: https://etherpad.openstack.org/p/PTG-Dublin-TC-topics Finally we need to start the S release naming process. pabelanger volunteered to lead that, and will push a release_naming.rst change proposing dates and geographic area for the name choices. == Office hours == To be more inclusive of all timezones and more mindful of people for which English is not the primary language, the Technical Committee dropped its dependency on weekly meetings. So that you can still get hold of TC members on IRC, we instituted a series of office hours on #openstack-tc: * 09:00 UTC on Tuesdays * 01:00 UTC on Wednesdays * 15:00 UTC on Thursdays For the coming week, I expect discussions to be around final Rocky goal selection and PTG prep. Cheers, -- Thierry Carrez (ttx) From delightwook at ssu.ac.kr Fri Feb 9 12:04:24 2018 From: delightwook at ssu.ac.kr (MinWookKim) Date: Fri, 9 Feb 2018 21:04:24 +0900 Subject: [openstack-dev] [OpenStack][Vitrage] .success error on vitrage-dashboard Message-ID: <01a701d3a19e$209a1570$61ce4050$@ssu.ac.kr> Hello Vitrage. I installed the vitrage and vitrage-dashboard master versions and tested them. However, an unrecognized error ('.success () is not function') occurs and all panels of the vitrage-dashboard do not appear normally. I can not figure out the cause, but I changed the .success and .error of each function to .then and .catch in dashboard / static / dashboard / projct / services / vitrage_topology.service.js. As a result of this, I have confirmed the normal operation of the vitrage-dashboard panel. What is the cause? Thanks :) Best Regards, Minwook. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From no-reply at openstack.org Fri Feb 9 13:25:24 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Fri, 09 Feb 2018 13:25:24 -0000 Subject: [openstack-dev] [zaqar] zaqar-ui 4.0.0.0rc1 (queens) Message-ID: Hello everyone, A new release candidate for zaqar-ui for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/zaqar-ui/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/zaqar-ui/log/?h=stable/queens Release notes for zaqar-ui can be found at: http://docs.openstack.org/releasenotes/zaqar-ui/ If you find an issue that could be considered release-critical, please file it at: https://bugs.launchpad.net/zaqar-ui and tag it *queens-rc-potential* to bring it to the zaqar-ui release crew's attention. From no-reply at openstack.org Fri Feb 9 13:28:30 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Fri, 09 Feb 2018 13:28:30 -0000 Subject: [openstack-dev] [zaqar] zaqar 6.0.0.0rc1 (queens) Message-ID: Hello everyone, A new release candidate for zaqar for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/zaqar/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/zaqar/log/?h=stable/queens Release notes for zaqar can be found at: http://docs.openstack.org/releasenotes/zaqar/ From no-reply at openstack.org Fri Feb 9 14:03:50 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Fri, 09 Feb 2018 14:03:50 -0000 Subject: [openstack-dev] [sahara] sahara-dashboard 8.0.0.0rc1 (queens) Message-ID: Hello everyone, A new release candidate for sahara-dashboard for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/sahara-dashboard/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/sahara-dashboard/log/?h=stable/queens Release notes for sahara-dashboard can be found at: http://docs.openstack.org/releasenotes/sahara-dashboard/ From no-reply at openstack.org Fri Feb 9 14:08:52 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Fri, 09 Feb 2018 14:08:52 -0000 Subject: [openstack-dev] [sahara] sahara-image-elements 8.0.0.0rc1 (queens) Message-ID: Hello everyone, A new release candidate for sahara-image-elements for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/sahara-image-elements/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! 
Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/sahara-image-elements/log/?h=stable/queens Release notes for sahara-image-elements can be found at: http://docs.openstack.org/releasenotes/sahara-image-elements/ From no-reply at openstack.org Fri Feb 9 14:11:03 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Fri, 09 Feb 2018 14:11:03 -0000 Subject: [openstack-dev] [sahara] sahara 8.0.0.0rc1 (queens) Message-ID: Hello everyone, A new release candidate for sahara for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/sahara/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/sahara/log/?h=stable/queens Release notes for sahara can be found at: http://docs.openstack.org/releasenotes/sahara/ From doug at doughellmann.com Fri Feb 9 14:25:32 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 09 Feb 2018 09:25:32 -0500 Subject: [openstack-dev] [reno] an alternative approach to known issues In-Reply-To: <20180208224356.os3z5qqqcvo53xtp@localhost> References: <20180208224356.os3z5qqqcvo53xtp@localhost> Message-ID: <1518186260-sup-4136@lrrr.local> Excerpts from Gabriele Cerami's message of 2018-02-08 22:43:56 +0000: > Hi, > > sometimes it happens, while reviewing a patch, to find an issue that > is not quite a bug, because it doesn't limit functionality, but > may represent a problem in some corner case, or with some possible > future modification in some component involved in the patch; it may > best be described as a weakness in the code, which may happen only under > certain circumstances. > The author, for some time or complexity constraint is creating a > technical debt, or making a micro design choice. > > How to keep track of the issue ? How, after 6 month when there's time > and bandwidth to look at the problem again, can this note be found and > issue dealt in the way it deserves ? > How to help prioritize then the list of issues left behind during the > duration of a release ? > Nobody is going to read all the comments on all the merged patches in > the past months, to find all the objections. > Also technical debts cannot be treated like bugs, because they have a > different life span. A bug is opened and closed for good after a while. > A technical debt may be carried for long time, and it would be perfectly > natural to mark it as something to just live with, and pay the interest > for, because the time required to solve it it's not worth it. And > despite that, it's important to keep track of them because an eventual > reevaluation of the interests cost or a change in the surroundings (a > new requirement that breaks an assumption) may lead to a different > decision after some time. > > The way technical debts are treated right now officially is by adding a > TODO note inside the code, or maybe adding a "issue" field in release > notes. > I would like to expand this TODO note, and the known issue field, > make it become something more structured. > I thought about reno, to create a technical debt register/micro design > document. > A developer would generate a UUID, put on the code a comment > > # TD: > > and then add the description in reno. 
A simple yaml associative array > with three or four keys: UUID, description, consequences, options, which > may describe either the problem or the micro design choice and > assumption without which the code may show these weaknesses. > The description would stay with the code, submitted with the same > patch with which it was introduced. Then when it's time, a report on all > these description could be created to evaluate, prioritize and > eventually close the gap that was created, or just mark that as "prefer > to just deal with the consequences" > > One may later incur in a problem a number of times, find the piece of > code responsible, and see that the problem is know, and immediately > raise its impact to request a reevaluation. > Or we may realize that the code that creates a certain amount of > weaknesses is going to be deleted, and we can close all the items > related to it. > > The creation and handling of such items could add too much of a burden > to the developer, for these reasons, I would prefer to automate some > part of the creation, for example the UUID generation, date expansion, > status change on the item. > > I used this, to try out how this automation could work > > https://review.openstack.org/538233 > > which could add basic logic to the templates, to automate some of the > tasks. > > This idea certainly requires refinement (for example what happens when > the weakness is discovered at a later time), but I would like to > understand if it's possible to use reno for this approach. Any feedback > would be highly appreciated. > > Thanks > What makes reno a good fit for this task? It seems like updating a regular documentation page in the source tree would work just as well, since presumably these technical debt descriptions don't need to be backported to stable branches. Doug From prometheanfire at gentoo.org Fri Feb 9 14:25:51 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Fri, 9 Feb 2018 08:25:51 -0600 Subject: [openstack-dev] [OpenStackClient][Security][ec2-api][heat][horizon][ironic][kuryr][magnum][manila][masakari][neutron][senlin][shade][solum][swift][tacker][tricircle][vitrage][watcher][winstackers] Help needed for your release In-Reply-To: References: <20180207162357.6vbsj5ty76hvhxiw@gentoo.org> <271f73e0-8dc6-d680-711f-6cf4f1911254@redhat.com> <20180207211822.GJ23143@thor.bakeyournoodle.com> <20180207213102.GK23143@thor.bakeyournoodle.com> <7921c7b2-8841-86bb-abe3-09f64d624208@redhat.com> Message-ID: <20180209142551.fzeyhnf5fpqzglor@gentoo.org> On 18-02-09 17:21:26, Sam P wrote: > Hi Matthew, > > Thanks for the info. > For Masakari, after discussing with release team, all following 3 project > will do independent > release for Queens. > masakari masakari > ​​ > masakari-monitors masakari > python-masakariclient masakari > > Still need 3-4 days to create stable/queens for masakari and > masakari-monitors > and python-masakariclinet can be done late today. > I will create stable/queens as soon as we merged the required patches. > Requirement update will unfreeze soon, and we will hold the requirement > updates till we create stable/queens. > > > --- Regards, > Sampath > > > On Thu, Feb 8, 2018 at 7:33 PM, Dmitry Tantsur wrote: > > > On 02/07/2018 10:31 PM, Tony Breeds wrote: > > > >> On Thu, Feb 08, 2018 at 08:18:37AM +1100, Tony Breeds wrote: > >> > >> Okay It's safe to ignore then ;P We should probably remove it from > >>> projects.txt if it really is empty I'll propose that. 
> >>> > >> > >> Oh my bad, ironic-python-agent-builder was included as it's included as > >> an ironic project[1] NOT because it;s listed in projects.txt. Given > >> that it's clearly not for me to remove anything. > >> > >> Having said that if the project hasn't had any updates at all since it's > >> creation in July 2017 perhaps it's no longer needed and could be > >> removed? > >> > > > > We do plan to use it, we just never had time to populate it :( > > > > > >> Yours Tony. > >> > >> [1] http://git.openstack.org/cgit/openstack/governance/tree/refe > >> rence/projects.yaml#n1539 > >> Sounds good, thanks for the update. -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From jtomasek at redhat.com Fri Feb 9 14:49:22 2018 From: jtomasek at redhat.com (Jiri Tomasek) Date: Fri, 9 Feb 2018 15:49:22 +0100 Subject: [openstack-dev] [TripleO][ui] Network Configuration wizard Message-ID: Hi, all Full support for network configuration is one of the main goals for TripleO UI for Rocky cycle as it is missing part which still requires user to manually prepare templates and provide them to deployment plan. *Step 1. Network Isolation* In Queens cycle we've started working on adding roles and networks management Mistral workflows [1], [2] which allows GUI to provide composable roles and networks features. Roles management workflows are landed, networks management work has most of the patches up for a review. Both roles and networks management is based on a similar concept of having roles/networks directory in deployment plan which consists of roles/networks definitions available to be used for deployment. The list of selected roles/networks which are actually used for deployment as well as it's configuration is stored in roles_data.yaml and network_data.yaml which are then used for populating jinja templates/environments. TripleO-common then provides Mistral workflows for listing available roles/networks, listing currently selected roles/networks, updating roles/networks and selecting roles/networks. This functionality allows us to: Select roles for deployment and configure them Select networks used for deployment and configure them Assign networks to roles Result of this is network-isolation.yaml environment file with correct templates configured in resource_registry and parameters set according to information in networks_data.yaml and roles_data.yaml Work needed to finish: [tripleo-heat-templates] Add networks directory https://review.openstack.org/#/c/520634/ [tripleo-common] Update Networks https://blueprints.launchpad.net/tripleo/+spec/update-networks-action Get Available Networks https://blueprints.launchpad.net/tripleo/+spec/get-networks-action Select Networks (will be pretty much the same as https://blueprints.launchpad.net/tripleo/+spec/tripleo-common-select-roles-workflow ) [tripleo-ui] , Wireframes [6] Create Network configuration step in deployment plan page Create network configuration wizard view Create dialog to select networks used for deployment Create dialog to configure networks Create dialog to assign networks to roles https://blueprints.launchpad.net/tripleo/+spec/networks-roles-assignment-ui Up to here the direction is pretty well defined. *Step 2. network-environment -> NIC configs* Second step of network configuration is NIC config. 
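For reference, below is a minimal, illustrative fragment of the kind of network_config entry these templates contain, assigning one interface to the Tenant network. The NIC name and surrounding layout are assumptions for the example; the parameter wiring (get_param: TenantIpSubnet) is exactly the kind of reference the wizard discussed below would need to generate automatically:

    network_config:
      - type: interface
        name: nic2                    # assumed NIC for this example
        use_dhcp: false
        addresses:
          - ip_netmask:
              get_param: TenantIpSubnet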
For this network-environment.yaml is used which references NIC config templates which define network_config in their resources section. User is currently required to configure these templates manually. We would like to provide interactive view which would allow user to setup these templates using TripleO UI. A good example is a standalone tool created by Ben Nemec [3]. There is currently work aimed for Pike to introduce jinja templating for network environments and templates [4] (single-nic-with-vlans, bond-with-vlans) to support composable networks and roles (integrate data from roles_data.yaml and network_data.yaml) It would be great if we could move this one step further by using these samples as a starting point and let user specify full NIC configuration. Available information at this point: - list of roles and networks as well as which networks need to be configured at which role's NIC Config template - os-net-config schema which defines NIC configuration elements and relationships [5] - jinja templated sample NIC templates Requirements: - provide feedback to the user about networks assigned to role and have not been configured in NIC config yet - let user construct network_config section of NIC config templates for each role (brigdes/bonds/vlans/interfaces...) - provide means to assign network to vlans/interfaces and automatically construct network_config section parameter references - populate parameter definitions in NIC config templates based on role/networks assignment - populate parameter definitions in NIC config templates based on specific elements which use them e.g. BondInterfaceOvsOptions in case when ovs_bond is used - store NIC config templates in deployment plan and reference them from network-environment.yaml Problems to solve: As a biggest problem to solve I see defining logic which would automatically handle assigning parameters to elements in network_config based on Network which user assigns to the element. For example: Using GUI, user is creating network_config for compute role based on network/config/multiple-nics/compute.yaml, user adds an interface and assigns the interface to Tenant network. Resulting template should then automatically populate addresses/ip_netmask: get_param: TenantIpSubnet. Question is whether all this logic should live in GUI or should GUI pass simplified format to Mistral workflow which will convert it to proper network_config format and populates the template with it. I'd really like to hear some ideas or feedback on this so we can figure out how to define a mechanism for configuring NICs. I bet Ben can provide a valuable info since he's implemented similar logic in his tool. [3] [1] https://blueprints.launchpad.net/tripleo/+spec/roles-management [2] https://blueprints.launchpad.net/tripleo/+spec/networks-management [3] https://www.youtube.com/watch?v=k2ZBkkHdeEM [4] https://bugs.launchpad.net/tripleo/+bug/1737041 [5] http://git.openstack.org/cgit/openstack/os-net-config/ tree/os_net_config/schema.yaml [6] https://lizsurette.github.io/OpenStack-Design/tripleo-ui/3-tripleo-ui-edge-cases/7.advancednetworkconfigurationandtopology Thanks -- Jirka -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Fri Feb 9 15:01:43 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 9 Feb 2018 09:01:43 -0600 Subject: [openstack-dev] [nova] Adding Takashi Natsume to python-novaclient core Message-ID: I'd like to add Takashi to the python-novaclient core team. 
python-novaclient doesn't get a ton of activity or review, but Takashi has been a solid reviewer and contributor to that project for quite awhile now: http://stackalytics.com/report/contribution/python-novaclient/180 He's always fast to get new changes up for microversion support and help review others that are there to keep moving changes forward. So unless there are objections, I'll plan on adding Takashi to the python-novaclient-core group next week. -- Thanks, Matt From cdent+os at anticdent.org Fri Feb 9 15:12:51 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 9 Feb 2018 15:12:51 +0000 (GMT) Subject: [openstack-dev] [nova] [placement] resource providers update 18-06 Message-ID: Resource provider 18-06 is here. # Most Important RC1 was cut last night, so we shouldn't be merging any new features now, just bug fixes. Which, of course, means finding and fixing bugs is the thing to do. In the gaps where that's not happening, planning for Rocky is a useful thing to be doing. The PTG is coming up at the end of this month. If you have topics for discussion that are not already on the etherpad, add them: https://etherpad.openstack.org/p/nova-ptg-rocky A variety of specs, and discussions related to such things, are in progress and listed below. If I've forgotten something, let me know, as usual. I wrote a thing describing some of my efforts to break placement: https://anticdent.org/placement-scale-fun.html Placement itself was fine, but I was able to break other stuff. If you have an environment where you are able to do that kind of concrete experimentation, it will help to make the release better. # What's Changed RC1 happened. Some more "sending global request id" changes merged. A release note was created to describe the behavior change in AggregateCoreFilter (and friends): https://review.openstack.org/#/c/541018/ # Help Wanted Testing, Testing, Testing. There are a fair few unstarted bugs related to placement that could do with some attention. Here's a handy URL: https://goo.gl/TgiPXb # Specs * Support traits in Glance https://review.openstack.org/#/c/541507/4 * Add generation support in aggregate assocation https://review.openstack.org/#/c/540447/ * Update ProviderTree https://review.openstack.org/#/c/540111/ * Support aggregate affinity filter/weighers https://review.openstack.org/#/c/529135/ (Note that this is not placement aggregates and is not a placement-oriented solution but is something many of the same people are into.) * Granular Resource Request Syntax (Rocky) https://review.openstack.org/#/c/540179/ * Report CPU features to placement https://review.openstack.org/#/c/497733/ # Main Themes We've not yet identified the new themes, other than to know that Nested remains a big deal. Presumably at the PTG we will define and then narrow the themes. ## Nested Resource Providers Work continues at https://review.openstack.org/#/q/status:open+topic:bp/nested-resource-providers By which I mean that there's lots of active work and discussion on the patches on this topic. It's the locus of activity. # Other Many of these things are bug fixes or doc tuneups, and thus potentially relevant for Queens. 
* Update references to OSC in old rp specs https://review.openstack.org/#/c/539038/ * [Placement] Invalid query parameter could lead to HTTP 500 https://review.openstack.org/#/c/539408/ * [placement] use simple FaultWrapper https://review.openstack.org/#/c/533752/ * Ensure resource classes correctly https://review.openstack.org/#/c/539738/ * Avoid inventory DELETE API (no conflict detection) https://review.openstack.org/#/c/539712/ * Fix nits in allocation canidate limit handling https://review.openstack.org/#/c/536784/ * WIP: Move resource provider objects https://review.openstack.org/#/c/540049/ * Do not normalize allocation ratios https://review.openstack.org/#/c/532924/ * Sending global request ids from nova to placement https://review.openstack.org/#/q/topic:bug/1734625 * Update resources once in update available resources https://review.openstack.org/#/c/520024/ (This ought, when it works, to help address some redunancy concerns with nova making too many requests to placement) * Support aggregate affinity filters/weighers https://review.openstack.org/#/q/topic:bp/aggregate-affinity A rocky targeted improvement to affinity handling * Move placement body samples in docs to own dir https://review.openstack.org/#/c/529998/ * Improved functional test coverage for placement https://review.openstack.org/#/q/topic:bp/placement-test-enhancement * Functional tests for traits api https://review.openstack.org/#/c/524094/ * annotate loadapp() (for placement wsgi app) as public https://review.openstack.org/#/c/526691/ * Remove microversion fallback code from report client https://review.openstack.org/#/c/528794/ * WIP: SchedulerReportClient.set_aggregates_for_provider https://review.openstack.org/#/c/532995/ This is for rocky as it depends on changing the api for aggregates handling on the placement side to accept and provide a generation * Add functional test for two-cell scheduler behaviors https://review.openstack.org/#/c/452006/ (This is old and maybe out of date, but something we might like to resurrect) * Make API history doc consistent https://review.openstack.org/#/c/477478/ * WIP: General policy sample file for placement https://review.openstack.org/#/c/524425/ * Support relay RP for allocation candidates https://review.openstack.org/#/c/533437/ Bug fix for sharing with multiple providers * Convert driver supported capabilities to compute node provider traits https://review.openstack.org/#/c/538498/ * Check for leaked allocations in post_test_hook https://review.openstack.org/#/c/538510/ # End Hi. Thanks for making it this far. Go find bugs. -- Chris Dent (⊙_⊙') https://anticdent.org/ freenode: cdent tw: @anticdent From sambetts at cisco.com Fri Feb 9 15:13:17 2018 From: sambetts at cisco.com (Sam Betts (sambetts)) Date: Fri, 9 Feb 2018 15:13:17 +0000 Subject: [openstack-dev] [ironic] Nominating Hironori Shiina for ironic-core In-Reply-To: References: Message-ID: <50256F64-AEAB-44B0-A27B-C23437F7C770@cisco.com> +1 On 09/02/2018, 04:36, "Shivanand Tendulker" > wrote: +1 On Wed, Feb 7, 2018 at 11:53 AM, John Villalovos > wrote: +1 On Mon, Feb 5, 2018 at 10:12 AM, Julia Kreger > wrote: I would like to nominate Hironori Shiina to ironic-core. He has been working in the ironic community for some time, and has been helping over the past several cycles with more complex features. He has demonstrated an understanding of Ironic's code base, mechanics, and overall community style. His review statistics are also extremely solid. I personally have a great deal of trust in his reviews. 
I believe he would make a great addition to our team. Thanks, -Julia __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Fri Feb 9 15:22:50 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Fri, 9 Feb 2018 07:22:50 -0800 Subject: [openstack-dev] [ironic] Nominating Hironori Shiina for ironic-core In-Reply-To: <50256F64-AEAB-44B0-A27B-C23437F7C770@cisco.com> References: <50256F64-AEAB-44B0-A27B-C23437F7C770@cisco.com> Message-ID: Since all of our ironic cores have replied and nobody has stated any objections, I guess it is time to welcome Hironori to the team! I will make the changes in gerrit after coffee. Thanks everyone! -Julia On Fri, Feb 9, 2018 at 7:13 AM, Sam Betts (sambetts) wrote: > +1 > > From no-reply at openstack.org Fri Feb 9 15:35:39 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Fri, 09 Feb 2018 15:35:39 -0000 Subject: [openstack-dev] [telemetry] ceilometer-powervm 6.0.0.0rc1 (queens) Message-ID: Hello everyone, A new release candidate for ceilometer-powervm for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/ceilometer-powervm/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/ceilometer-powervm/log/?h=stable/queens Release notes for ceilometer-powervm can be found at: http://docs.openstack.org/releasenotes/ceilometer-powervm/ From no-reply at openstack.org Fri Feb 9 15:40:58 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Fri, 09 Feb 2018 15:40:58 -0000 Subject: [openstack-dev] [keystone] keystone 13.0.0.0rc1 (queens) Message-ID: Hello everyone, A new release candidate for keystone for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/keystone/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/keystone/log/?h=stable/queens Release notes for keystone can be found at: http://docs.openstack.org/releasenotes/keystone/ From no-reply at openstack.org Fri Feb 9 15:49:39 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Fri, 09 Feb 2018 15:49:39 -0000 Subject: [openstack-dev] [mistral] mistral 6.0.0.0rc1 (queens) Message-ID: Hello everyone, A new release candidate for mistral for the end of the Queens cycle is available! 
You can find the source code tarball at: https://tarballs.openstack.org/mistral/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/mistral/log/?h=stable/queens Release notes for mistral can be found at: http://docs.openstack.org/releasenotes/mistral/ From no-reply at openstack.org Fri Feb 9 15:50:28 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Fri, 09 Feb 2018 15:50:28 -0000 Subject: [openstack-dev] [mistral] mistral-dashboard 6.0.0.0rc1 (queens) Message-ID: Hello everyone, A new release candidate for mistral-dashboard for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/mistral-dashboard/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/mistral-dashboard/log/?h=stable/queens Release notes for mistral-dashboard can be found at: http://docs.openstack.org/releasenotes/mistral-dashboard/ From no-reply at openstack.org Fri Feb 9 15:52:13 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Fri, 09 Feb 2018 15:52:13 -0000 Subject: [openstack-dev] [mistral] mistral-extra 6.0.0.0rc1 (queens) Message-ID: Hello everyone, A new release candidate for mistral-extra for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/mistral-extra/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/mistral-extra/log/?h=stable/queens Release notes for mistral-extra can be found at: http://docs.openstack.org/releasenotes/mistral-extra/ From giuseppe.decandia at gmail.com Fri Feb 9 16:00:25 2018 From: giuseppe.decandia at gmail.com (Pino de Candia) Date: Fri, 9 Feb 2018 10:00:25 -0600 Subject: [openstack-dev] [infra] Please add me to Tatu's Gerrit groups Message-ID: Hi Infra Team, I'd like to be added to the recently created tatu-core and tatu-release Gerrit groups. Thanks! Pino -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbauza at redhat.com Fri Feb 9 16:02:03 2018 From: sbauza at redhat.com (Sylvain Bauza) Date: Fri, 9 Feb 2018 17:02:03 +0100 Subject: [openstack-dev] [nova] Adding Takashi Natsume to python-novaclient core In-Reply-To: References: Message-ID: +1, no objections so far. On Fri, Feb 9, 2018 at 4:01 PM, Matt Riedemann wrote: > I'd like to add Takashi to the python-novaclient core team. > > python-novaclient doesn't get a ton of activity or review, but Takashi has > been a solid reviewer and contributor to that project for quite awhile now: > > http://stackalytics.com/report/contribution/python-novaclient/180 > > He's always fast to get new changes up for microversion support and help > review others that are there to keep moving changes forward. 
> > So unless there are objections, I'll plan on adding Takashi to the > python-novaclient-core group next week. > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Fri Feb 9 16:03:07 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 9 Feb 2018 10:03:07 -0600 Subject: [openstack-dev] [barbican][heat] Missing RCs Message-ID: <20180209160306.GA13651@sm-xps> Hello teams, Yesterday was the RC1 deadline, and we have not seen a release request for either Barbican or Heat. If there is some blocking reason for waiting on these, please let us know as soon as possible. Otherwise, please submit a release request with branching for stable/queens to the openstack/releases repo. RC's are needed to help test and package, so we need to get something out there relatively soon. If we do not see a release request from these teams, the release team will need to force a release and branch creation by this coming Tuesday, the 13th. We would prefer to not be the ones doing that though. Thanks for your attention. Please let us know if there are any questions or issues. Sean (smcginnis) From sfinucan at redhat.com Fri Feb 9 16:09:15 2018 From: sfinucan at redhat.com (Stephen Finucane) Date: Fri, 09 Feb 2018 16:09:15 +0000 Subject: [openstack-dev] [nova] Adding Takashi Natsume to python-novaclient core In-Reply-To: References: Message-ID: <1518192555.7986.1.camel@redhat.com> On Fri, 2018-02-09 at 09:01 -0600, Matt Riedemann wrote: > I'd like to add Takashi to the python-novaclient core team. > > python-novaclient doesn't get a ton of activity or review, but Takashi > has been a solid reviewer and contributor to that project for quite > awhile now: > > http://stackalytics.com/report/contribution/python-novaclient/180 > > He's always fast to get new changes up for microversion support and help > review others that are there to keep moving changes forward. > > So unless there are objections, I'll plan on adding Takashi to the > python-novaclient-core group next week. Easy +1 from me. Would be good to have him on the team. Stephen From sean.mcginnis at gmx.com Fri Feb 9 16:10:10 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 9 Feb 2018 10:10:10 -0600 Subject: [openstack-dev] [heat][RelMgmt] heat-translator deliverable type? Message-ID: <20180209161009.GB13651@sm-xps> Hello Heat team, The release team just recently noticed the heat-translator deliverable is marked as a type of "other" and is following the release-model of "cycle-with-intermediary". It appears this is actually a library though. It's hard to tell, but it is either a client lib or non-client lib. In either case, we need to mark it as such and treat its release according to that type. In order to stabilize requirements changes, we have two separate deadlines, first for non-client libs, then a week later for client libs. Since heat-translator appears to be a dependency used by other projects, it will need to be subject to those release deadlines. Can the team clarify what type of library this is, and change the type going forward to accurately reflect that? Please let the release team know if there are any questions. Thanks! 
Sean (smcginnis) From prometheanfire at gentoo.org Fri Feb 9 16:18:17 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Fri, 9 Feb 2018 10:18:17 -0600 Subject: [openstack-dev] [heat][tacker][murano][RelMgmt] heat-translator deliverable type? In-Reply-To: <20180209161009.GB13651@sm-xps> References: <20180209161009.GB13651@sm-xps> Message-ID: <20180209161817.ubwkbw5bezsc5ej2@gentoo.org> On 18-02-09 10:10:10, Sean McGinnis wrote: > Hello Heat team, > > The release team just recently noticed the heat-translator deliverable is > marked as a type of "other" and is following the release-model of > "cycle-with-intermediary". > > It appears this is actually a library though. It's hard to tell, but it is > either a client lib or non-client lib. In either case, we need to mark it as > such and treat its release according to that type. > > In order to stabilize requirements changes, we have two separate deadlines, > first for non-client libs, then a week later for client libs. Since > heat-translator appears to be a dependency used by other projects, it will need > to be subject to those release deadlines. > > Can the team clarify what type of library this is, and change the type going > forward to accurately reflect that? > > Please let the release team know if there are any questions. > I think tacker and murano may be using it in a non-intended way (those are the only projects I could find that's using it). https://github.com/openstack/tacker/search?utf8=%E2%9C%93&q=translator&type= https://github.com/openstack/murano/search?utf8=%E2%9C%93&q=translator&type= -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From thierry at openstack.org Fri Feb 9 16:27:21 2018 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 9 Feb 2018 17:27:21 +0100 Subject: [openstack-dev] [reno] an alternative approach to known issues In-Reply-To: <1518186260-sup-4136@lrrr.local> References: <20180208224356.os3z5qqqcvo53xtp@localhost> <1518186260-sup-4136@lrrr.local> Message-ID: <4e4eace4-9f7f-e6e3-d3b8-796f2414f7fb@openstack.org> Doug Hellmann wrote: > What makes reno a good fit for this task? It seems like updating a > regular documentation page in the source tree would work just as well, > since presumably these technical debt descriptions don't need to be > backported to stable branches. Yeah it feels like reno would add complexity for little benefit in that process... Better track debt in a TODO document, or a proper task tracker ? -- Thierry Carrez (ttx) From bob.haddleton at nokia.com Fri Feb 9 16:38:25 2018 From: bob.haddleton at nokia.com (HADDLETON, Robert W (Bob)) Date: Fri, 9 Feb 2018 10:38:25 -0600 Subject: [openstack-dev] [heat][tacker][murano][RelMgmt] heat-translator deliverable type? In-Reply-To: <20180209161817.ubwkbw5bezsc5ej2@gentoo.org> References: <20180209161009.GB13651@sm-xps> <20180209161817.ubwkbw5bezsc5ej2@gentoo.org> Message-ID: On 2/9/2018 10:18 AM, Matthew Thode wrote: > On 18-02-09 10:10:10, Sean McGinnis wrote: >> Hello Heat team, >> >> The release team just recently noticed the heat-translator deliverable is >> marked as a type of "other" and is following the release-model of >> "cycle-with-intermediary". >> >> It appears this is actually a library though. It's hard to tell, but it is >> either a client lib or non-client lib. In either case, we need to mark it as >> such and treat its release according to that type. 
>> >> In order to stabilize requirements changes, we have two separate deadlines, >> first for non-client libs, then a week later for client libs. Since >> heat-translator appears to be a dependency used by other projects, it will need >> to be subject to those release deadlines. >> >> Can the team clarify what type of library this is, and change the type going >> forward to accurately reflect that? >> >> Please let the release team know if there are any questions. >> > I think tacker and murano may be using it in a non-intended way (those > are the only projects I could find that's using it). > > https://github.com/openstack/tacker/search?utf8=%E2%9C%93&q=translator&type= > https://github.com/openstack/murano/search?utf8=%E2%9C%93&q=translator&type= Tacker worked with us on their usage so it's a known scenario.  I wasn't aware that Murano was using h-t, but if the implementation is similar to Tacker's it should be fine. spzala knows more of the history than I do, but heat-translator, like tosca-parser, was originally a stand-alone client that has evolved into a library that can be used by other projects.  Both projects are now used as libraries by Tacker, and possibly others, in addition to having users of the command-line clients. When we moved into the release model it was suggested that we use type "other", but I'm happy to change to whatever the appropriate release type is.  Both projects are fairly low volume at the moment, so we don't need a lot of releases. Thanks Bob Haddleton -------------- next part -------------- A non-text attachment was scrubbed... Name: bob_haddleton.vcf Type: text/x-vcard Size: 252 bytes Desc: not available URL: From ken1ohmichi at gmail.com Fri Feb 9 16:44:43 2018 From: ken1ohmichi at gmail.com (Ken'ichi Ohmichi) Date: Fri, 09 Feb 2018 16:44:43 +0000 Subject: [openstack-dev] [nova] Adding Takashi Natsume to python-novaclient core In-Reply-To: <1518192555.7986.1.camel@redhat.com> References: <1518192555.7986.1.camel@redhat.com> Message-ID: +1 2018年2月9日(金) 8:09 Stephen Finucane : > On Fri, 2018-02-09 at 09:01 -0600, Matt Riedemann wrote: > > I'd like to add Takashi to the python-novaclient core team. > > > > python-novaclient doesn't get a ton of activity or review, but Takashi > > has been a solid reviewer and contributor to that project for quite > > awhile now: > > > > http://stackalytics.com/report/contribution/python-novaclient/180 > > > > He's always fast to get new changes up for microversion support and help > > review others that are there to keep moving changes forward. > > > > So unless there are objections, I'll plan on adding Takashi to the > > python-novaclient-core group next week. > > Easy +1 from me. Would be good to have him on the team. > > Stephen > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From balazs.gibizer at ericsson.com Fri Feb 9 16:52:46 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Fri, 9 Feb 2018 17:52:46 +0100 Subject: [openstack-dev] [nova] Adding Takashi Natsume to python-novaclient core In-Reply-To: References: Message-ID: <1518195166.9865.1@smtp.office365.com> On Fri, Feb 9, 2018 at 4:01 PM, Matt Riedemann wrote: > I'd like to add Takashi to the python-novaclient core team. > > python-novaclient doesn't get a ton of activity or review, but > Takashi has been a solid reviewer and contributor to that project for > quite awhile now: > > http://stackalytics.com/report/contribution/python-novaclient/180 > > He's always fast to get new changes up for microversion support and > help review others that are there to keep moving changes forward. > > So unless there are objections, I'll plan on adding Takashi to the > python-novaclient-core group next week. +1 Cheers, gibi > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From no-reply at openstack.org Fri Feb 9 16:54:09 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Fri, 09 Feb 2018 16:54:09 -0000 Subject: [openstack-dev] [neutron] networking-bagpipe 8.0.0.0rc1 (queens) Message-ID: Hello everyone, A new release candidate for networking-bagpipe for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/networking-bagpipe/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/networking-bagpipe/log/?h=stable/queens Release notes for networking-bagpipe can be found at: http://docs.openstack.org/releasenotes/networking-bagpipe/ If you find an issue that could be considered release-critical, please file it at: http://bugs.launchpad.net/networking-bagpipe and tag it *queens-rc-potential* to bring it to the networking-bagpipe release crew's attention. From no-reply at openstack.org Fri Feb 9 17:08:07 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Fri, 09 Feb 2018 17:08:07 -0000 Subject: [openstack-dev] [neutron] networking-bgpvpn 8.0.0.0rc1 (queens) Message-ID: Hello everyone, A new release candidate for networking-bgpvpn for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/networking-bgpvpn/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! 
Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/networking-bgpvpn/log/?h=stable/queens Release notes for networking-bgpvpn can be found at: http://docs.openstack.org/releasenotes/networking-bgpvpn/ If you find an issue that could be considered release-critical, please file it at: http://bugs.launchpad.net/bgpvpn and tag it *queens-rc-potential* to bring it to the networking-bgpvpn release crew's attention. From no-reply at openstack.org Fri Feb 9 17:08:37 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Fri, 09 Feb 2018 17:08:37 -0000 Subject: [openstack-dev] [neutron] networking-odl 12.0.0.0rc1 (queens) Message-ID: Hello everyone, A new release candidate for networking-odl for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/networking-odl/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/networking-odl/log/?h=stable/queens Release notes for networking-odl can be found at: http://docs.openstack.org/releasenotes/networking-odl/ From no-reply at openstack.org Fri Feb 9 17:08:47 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Fri, 09 Feb 2018 17:08:47 -0000 Subject: [openstack-dev] [neutron] neutron-dynamic-routing 12.0.0.0rc1 (queens) Message-ID: Hello everyone, A new release candidate for neutron-dynamic-routing for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/neutron-dynamic-routing/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/neutron-dynamic-routing/log/?h=stable/queens Release notes for neutron-dynamic-routing can be found at: http://docs.openstack.org/releasenotes/neutron-dynamic-routing/ From no-reply at openstack.org Fri Feb 9 17:13:13 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Fri, 09 Feb 2018 17:13:13 -0000 Subject: [openstack-dev] [neutron] networking-sfc 6.0.0.0rc1 (queens) Message-ID: Hello everyone, A new release candidate for networking-sfc for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/networking-sfc/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/networking-sfc/log/?h=stable/queens Release notes for networking-sfc can be found at: http://docs.openstack.org/releasenotes/networking-sfc/ If you find an issue that could be considered release-critical, please file it at: http://bugs.launchpad.net/networking-sfc and tag it *queens-rc-potential* to bring it to the networking-sfc release crew's attention. 
From no-reply at openstack.org Fri Feb 9 17:13:21 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Fri, 09 Feb 2018 17:13:21 -0000 Subject: [openstack-dev] [neutron] neutron-fwaas 12.0.0.0rc1 (queens) Message-ID: Hello everyone, A new release candidate for neutron-fwaas for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/neutron-fwaas/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/neutron-fwaas/log/?h=stable/queens Release notes for neutron-fwaas can be found at: http://docs.openstack.org/releasenotes/neutron-fwaas/ From no-reply at openstack.org Fri Feb 9 17:13:36 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Fri, 09 Feb 2018 17:13:36 -0000 Subject: [openstack-dev] [neutron] networking-ovn 4.0.0.0rc1 (queens) Message-ID: Hello everyone, A new release candidate for networking-ovn for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/networking-ovn/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/networking-ovn/log/?h=stable/queens Release notes for networking-ovn can be found at: http://docs.openstack.org/releasenotes/networking-ovn/ If you find an issue that could be considered release-critical, please file it at: https://bugs.launchpad.net/networking-ovn and tag it *queens-rc-potential* to bring it to the networking-ovn release crew's attention. From no-reply at openstack.org Fri Feb 9 17:14:01 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Fri, 09 Feb 2018 17:14:01 -0000 Subject: [openstack-dev] [neutron] networking-midonet 6.0.0.0rc1 (queens) Message-ID: Hello everyone, A new release candidate for networking-midonet for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/networking-midonet/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/networking-midonet/log/?h=stable/queens Release notes for networking-midonet can be found at: http://docs.openstack.org/releasenotes/networking-midonet/ From no-reply at openstack.org Fri Feb 9 17:17:26 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Fri, 09 Feb 2018 17:17:26 -0000 Subject: [openstack-dev] [neutron] neutron 12.0.0.0rc1 (queens) Message-ID: Hello everyone, A new release candidate for neutron for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/neutron/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. 
You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/neutron/log/?h=stable/queens Release notes for neutron can be found at: http://docs.openstack.org/releasenotes/neutron/ From ltoscano at redhat.com Fri Feb 9 18:14:17 2018 From: ltoscano at redhat.com (Luigi Toscano) Date: Fri, 09 Feb 2018 19:14:17 +0100 Subject: [openstack-dev] [sahara] FFE - Adding Ambari 2.4.2.0 to image gen Message-ID: <89309518.qFsBkxmSyk@whitebase.usersys.redhat.com> Hi, I'd like to request a feature exception for https://review.openstack.org/#/c/529442/ I finally managed to test it, and the generated image with Ambari 2.4.2.0 can spawn clusters with both HDP 2.4 and HDP 2.3. There are some issues when Hive is involved, but I think that they are not regressions. The feature (which it's pretty trivial in itself, but required a fair amount of testing) is pretty much isolated and it won't cause regressions on other components anyway. Ciao -- Luigi From tenobreg at redhat.com Fri Feb 9 18:18:42 2018 From: tenobreg at redhat.com (Telles Nobrega) Date: Fri, 09 Feb 2018 18:18:42 +0000 Subject: [openstack-dev] [sahara] FFE - Adding Ambari 2.4.2.0 to image gen In-Reply-To: <89309518.qFsBkxmSyk@whitebase.usersys.redhat.com> References: <89309518.qFsBkxmSyk@whitebase.usersys.redhat.com> Message-ID: Taking that the risk to the project is none the FFE exception is granted. On Fri, Feb 9, 2018 at 3:15 PM Luigi Toscano wrote: > Hi, > I'd like to request a feature exception for > https://review.openstack.org/#/c/529442/ > > I finally managed to test it, and the generated image with Ambari 2.4.2.0 > can > spawn clusters with both HDP 2.4 and HDP 2.3. There are some issues when > Hive > is involved, but I think that they are not regressions. > > The feature (which it's pretty trivial in itself, but required a fair > amount > of testing) is pretty much isolated and it won't cause regressions on other > components anyway. > > Ciao > -- > Luigi > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- TELLES NOBREGA SOFTWARE ENGINEER Red Hat Brasil Av. Brg. Faria Lima, 3900 - 8º andar - Itaim Bibi, São Paulo tenobreg at redhat.com TRIED. TESTED. TRUSTED. Red Hat é reconhecida entre as melhores empresas para trabalhar no Brasil pelo Great Place to Work. -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Fri Feb 9 18:22:30 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Fri, 9 Feb 2018 13:22:30 -0500 Subject: [openstack-dev] [tripleo] Unbranched repositories and testing In-Reply-To: References: <4165b44a-820b-d025-673a-d3c37d1b6eb1@redhat.com> Message-ID: On Thu, Feb 8, 2018 at 6:23 PM, Alex Schultz wrote: > On Tue, Oct 10, 2017 at 2:24 PM, Emilien Macchi > wrote: > > On Fri, Oct 6, 2017 at 5:09 AM, Jiří Stránský wrote: > >> On 5.10.2017 22:40, Alex Schultz wrote: > >>> > >>> Hey folks, > >>> > >>> So I wandered across the policy spec[0] for how we should be handling > >>> unbranched repository reviews and I would like to start a broader > >>> discussion around this topic. 
We've seen it several times over the > >>> recent history where a change in oooqe or tripleo-ci ends up affecting > >>> either a stable branch or an additional set of jobs that were not run > >>> on the change. I think it's unrealistic to run every possible job > >>> combination on every submission and it's also a giant waste of CI > >>> resources. I also don't necessarily agree that we should be using > >>> depends-on to prove things are fine for a given patch for the same > >>> reasons. That being said, we do need to minimize our risk for patches > >>> to these repositories. > >>> > >>> At the PTG retrospective I mentioned component design structure[1] as > >>> something we need to be more aware of. I think this particular topic > >>> is one of those types of things where we could benefit from evaluating > >>> the structure and policy around these unbranched repositories to see > >>> if we can improve it. Is there a particular reason why we continue to > >>> try and support deployment of (at least) 3 or 4 different versions > >>> within a single repository? Are we adding new features that really > >>> shouldn't be consumed by these older versions such that perhaps it > >>> makes sense to actually create stable branches? Perhaps there are > >>> some other ideas that might work? > >> > >> > >> Other folks probably have a better view of the full context here, but > i'll > >> chime in with my 2 cents anyway.. > >> > >> I think using stable branches for tripleo-quickstart-extras could be > worth > >> it. The content there is quite tightly coupled with the expected TripleO > >> end-user workflows, which tend to evolve considerably between releases. > >> Branching extras might be a good way to "match the reality" in that > sense, > >> and stop worrying about breaking older workflows. (Just recently it > came up > >> that the upgrade workflow in O is slightly updated to make it work in > P, and > >> will change quite a bit for Q. Minor updates also changed between O and > P.) > >> > >> I'd say that tripleo-quickstart is a different story though. It seems > fairly > >> release-agnostic in its focus. We may want to keep it unbranched (?). > That > >> probably applies even more for tripleo-ci, where ability to make changes > >> which affect how TripleO does CIing in general, across releases, is IMO > a > >> significant feature. > >> > >> Maybe branching quickstart-extras might require some code reshuffling > >> between what belongs there and what belongs into quickstart itself. > > > > I agree a lot with Jirka and I think branching oooq-extras would be a > > good first start to see how it goes. > > If we find it helpful and working correctly, we could go the next > > steps and see if there is any other repo that could be branched > > (tripleo-ci or oooq) but I guess for now the best candidate is > > oooq-extras. > > > > I'm resurrecting this thread as we seemed to have done it again[0] > with a change oooq-extras master breaking stable/pike. So I would > propose that we start investigating branching oooq-extras. Does > anyone see any blocking issues with starting to branch this > repository? > > Thanks, > -Alex > > [0] https://bugs.launchpad.net/tripleo/+bug/1748315 Thanks Alex, TripleO-CI please be prepared to discuss this thread in the next scrum meeting. > > > > >> (Just my 2 cents, i'm likely not among the most important stakeholders > in > >> this...) 
> >> > >> Jirka > >> > >> > >>> > >>> Thanks, > >>> -Alex > >>> > >>> [0] https://review.openstack.org/#/c/478488/ > >>> [1] http://people.redhat.com/aschultz/denver-ptg/tripleo-ptg-retro.jpg > >>> > >>> ____________________________________________________________ > ______________ > >>> OpenStack Development Mailing List (not for usage questions) > >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >>> > >> > >> > >> ____________________________________________________________ > ______________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > -- > > Emilien Macchi > > > > ____________________________________________________________ > ______________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Fri Feb 9 18:47:52 2018 From: openstack at nemebean.com (Ben Nemec) Date: Fri, 9 Feb 2018 12:47:52 -0600 Subject: [openstack-dev] [reno][tripleo] an alternative approach to known issues In-Reply-To: <20180209014214.43ik5scukozb6hig@localhost> References: <20180208224356.os3z5qqqcvo53xtp@localhost> <7462d4c0-5c42-7586-4a0c-a18cfd63669b@nemebean.com> <20180209014214.43ik5scukozb6hig@localhost> Message-ID: <2922ebeb-7027-2fde-e007-736e40bcdb7e@nemebean.com> On 02/08/2018 07:42 PM, Gabriele Cerami wrote: > On 08 Feb, Ben Nemec wrote: >> So TripleO has a tech debt policy: https://specs.openstack.org/openstack/tripleo-specs/specs/policy/tech-debt-tracking.html >> (and I'm tagging tripleo on this thread for visibility). > > I didn't know about this policy. I've been circling around tech debts > for more than a month now, and nobody pointed me to it either. > > Anyway, I find it insufficient. Not specifically the tracking method, > but more the guidelines and the example, to understand how to use it > correctly. > > Doing some basic research, I see that in tripleo 31 bugs were marked > with tech-debt tag. 15 Were closed, but they were also marked as > CRITICAL. This does not match my definition of tech-debt. I would tend to agree. Tech debt is something you can live with for a period of time, and critical bugs are not. The critical tech debt bug open in tripleo at the time I'm writing this is clearly not critical since it's been open for months and nothing has happened with it, nor has it been blocking anyone from deploying or developing TripleO. 
> Of the remaining 16 sometimes it's hard to understand which part is the > technical debt, some are really new features requests matching more the > feeling "we may have needed to think about this months ago during the > design", for some it's just "we don't have a clear idea of what to do" > and the rest is "here's a bandaid, we'll think about it later" > > The policy lacks a definition of what is a technical debt. I understand > the issue as it's really difficult to find a unique definition that fits > all we want to include. > Whatever the definition we want it to be, there are at least three things > that I want to see in tech debt bug (or report), and they all try to > focus on the "debt" part of the whole "tech debt" concept. > > - What's the cost of the repayment > - What's the cost of the interests > - What's the frequency of the interests > > For me a technical debt is an imperfect implementation that has > consequences. Describable and maybe measurable consequences. > "I'm using list in this case for simplicity but if we add more items, we > may need a more efficient structure, because it will become too slow" > The cost of the repayment is the time spent to replace the structure and > its methods with something more complex > The cost of the interests is the speed lost when the list increases > The frequency of the interests is "this list will become very big every > three hours" > > Without these three elements it becomes hard to understand if we want to > really repay the debt, and how we prioritize the repayments. > > Since a tech debt is something that I find really related to the code > (Which piece or line of code is the one that has these measurale > consequences) I'd really like for the report to be as close as possible > to the code. > Also sometimes it may just become a design choice based on assumptions. > "I know the list is not efficient, but we'll rarely get it big often, > and we are sure to clear it out almost immediately" > > We can maybe discuss further the advantages of the existing bug tracking > for the handling of these reports. Absolutely. Policies are not set in stone for all time. They're living documents that can be updated as we find limitations or areas for improvement. Please feel free to propose any updates you think would be helpful to the existing policy. We can hash out the details in Gerrit. > >> I'm not sure I agree. Bugs stay open until they are fixed/won't fixed. Tech >> debt stays open until it is fixed/won't fixed. We've had bugs open for >> years for things that are tricky to fix. Arguably those are tech debt too, >> but in any case I'm not aware of any problems with using the bug tracker to >> manage them. > > Remember the "debt" in "technical debt". You're not reporting it > correctly if you don't measure the consequences. I don't think the > report should really be about the problem or the solution, because then > you're really only talking about the full repayment. > Of course without any description on the consequences, the tech debt may > be equated to a bug, you really have a problem and you want to discuss > only its solution. > > Another difference is that the importance of a bug rarely changes over > time, once correctly triaged. > > With the technical debt instead > - A won't fix doesn't mean that the interests are gone. You closed the > bug/tech debt and you are not counting the interests anymore. > Convenient and deceiving. There is no status currently that could put > the bug on hold. 
Removing it from all the short term consideration, > but make it still count for its interests, make it possible to > consider and reevaluate at any time. I don't think any bug should be closed as long as we have some interest in fixing it. If it's not high priority then it should be triaged as such, but I wouldn't advocate closing a bug just because we won't have time to get to it this cycle/year/decade. :-) The milestone field might be a good way to indicate that a bug is for future reference but probably won't be dealt with in the short term. I know I've seen projects that have a generic "future" milestone that could be used to indicate we don't know when we'll get to it, but still want to at some point. > - A tech debt really can get more and more costly to repay. If someone > else implement something over you "imperfect" code, the cost of the > repayment just doubled, because you have to fix a stack of code now. > Marking the code with a # TD may warn someone "be aware that someone > is trying to build over a problem" I think we are in agreement that there needs to be some sort of notation in the code to let people know that a given section is tech debt. > - The frequency of interests may increase also over time, and the > importance may raise as we are paying too much interests, and may be > better to start considering full repayment. Sure, but I don't think you can solve this with a tracking system. It's basically a question of re-triaging old bugs/tech debt regularly, and given how much trouble we have keeping up with doing that for new bugs I don't love our chances of doing it for old stuff. I also don't think this is unique to tech debt. Bugs have their priority changed all the time as people discover that they have more serious consequences than initially thought or we find out that a lot of people are running into a bug. > - One of the solution to a technical debt is "conversion": you just > render the imperfect solution just less imperfect, that is you don't > fully repay it, you repay just a little to lower the interests cost or > frequency. It's not a workaround, it's not a fix, you're just reducing > its impact. How do you report that in a bug tracking system ? Partial-Bug: 1234567 Even better, since the commit message for such a change should include a good explanation of how it is only a partial fix for the problem, you shouldn't even need to explicitly leave a comment on the bug. The gerrit bot will include the commit message as a comment when it merges. From fungi at yuggoth.org Fri Feb 9 19:21:27 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 9 Feb 2018 19:21:27 +0000 Subject: [openstack-dev] [infra] Please add me to Tatu's Gerrit groups In-Reply-To: References: Message-ID: <20180209192126.6hzzwqha36eda64l@yuggoth.org> On 2018-02-09 10:00:25 -0600 (-0600), Pino de Candia wrote: > I'd like to be added to the recently created tatu-core and > tatu-release Gerrit groups. Since your Gerrit account is the one which proposed the change to add the project whose ACLs use those groups, I have added you as the initial member in both of them. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From dmsimard at redhat.com Fri Feb 9 20:07:53 2018 From: dmsimard at redhat.com (David Moreau Simard) Date: Fri, 9 Feb 2018 15:07:53 -0500 Subject: [openstack-dev] [tripleo] Updating default Docker registry and namespace Message-ID: Hi, I've submitted a series of patches: https://review.openstack.org/#/q/topic:default-registry In these patches, I am merely doing the following: #1 s/trunk.registry.rdoproject.org/docker.io/ trunk.registry.rdoproject.org is not meant for production or stable use, it should only be used as a staging ground so we don't spam docker.io with hundreds of images needlessly. We're pushing and tagging tested and promoted images to docker.io -- trunk.registry.rdoproject.org should not be advertised. #2 s/latest/current-tripleo/ We don't use the "latest" tag, we push tags based on their trunk repository hash (DLRN) or names such as "current-tripleo", "current-tripleo-rdo", etc. #3 s/tripleoupstream/tripleomaster/ The docker.io/tripleoupstream namespace is unmaintained. Images are now pushed and tagged in the tripleomaster namespace. Patches for these should be backported to Pike while replacing "tripleomaster" for "tripleopike" once they have landed. Please validate the patches properly as I can't pretend to have tested these before sending them. Thanks, David Moreau Simard Senior Software Engineer | OpenStack RDO dmsimard = [irc, github, twitter] From emilien at redhat.com Fri Feb 9 22:02:58 2018 From: emilien at redhat.com (Emilien Macchi) Date: Fri, 9 Feb 2018 14:02:58 -0800 Subject: [openstack-dev] [tripleo] Updates on containerized undercloud Message-ID: Quite a lot of progress has been made over the last months (and days), so I found useful to share an update on where we are with the efforts on containerized undercloud. ## CI efforts - tripleo-ci-centos-7-undercloud-containers job has been reworked to use the "undercloud install" interface. Job is green and will probably start voting during the next days. See https://review.openstack.org/#/c/517445/ and https://review.openstack.org/#/c/517444/. - we're looking at switching some other CI jobs (maybe one to start) to deploy a containerized undercloud, and then deploy an overcloud (probably featureset010). We have a few blockers but we're working on it. It's mostly the overcloud_prep that fails: http://logs.openstack.org/06/542906/1/check/tripleo-ci-centos-7-containers-multinode/0b18c49/logs/undercloud/home/zuul/overcloud_prep_containers.log.txt.gz#_2018-02-09_17_07_27 - because of that effort, we're also taking an opportunity to refactor tripleo-quickstart-extras roles to be more "standard" for both undercloud and overcloud (example, renaming overcloud-prep-containers to prep-containers, etc). We'll need help from TripleO CI squad probably. - Chandan will help us to run validate-tempest in tripleo-ci-centos-7-undercloud-containers so we can have some actual testing for this job, since no overcloud is deployed. ## Feature parity - TLS work is ongoing. - TripleO UI containerization is ongoing. - Nova join support is targeted for Rocky - Upgrade workflow is under investigation. We'll work on re-using the upgrade_tasks in THT to upgrade a non-containerized undercloud (Queens) to a containerized undercloud (Rocky) like we did between Ocata and Pike with the upgrade_tasks. We'll actually re-use the same code but will have to change the undercloud upgrade workflow in tripleoclient. 
That highlights the current efforts, if you have any question, need more specific or just any feedback, please go ahead. At the PTG, we'll discuss about some technical details and hope to move forward with this nice feature during Rocky cycle. Thanks, -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Fri Feb 9 22:09:39 2018 From: emilien at redhat.com (Emilien Macchi) Date: Fri, 9 Feb 2018 14:09:39 -0800 Subject: [openstack-dev] [tripleo] Updates on containerized undercloud In-Reply-To: References: Message-ID: On Fri, Feb 9, 2018 at 2:02 PM, Emilien Macchi wrote: > Quite a lot of progress has been made over the last months (and days), so > I found useful to share an update on where we are with the efforts on > containerized undercloud. > > ## CI efforts > > - tripleo-ci-centos-7-undercloud-containers job has been reworked to use > the "undercloud install" interface. Job is green and will probably start > voting during the next days. See https://review.openstack.org/#/c/517445/ > and https://review.openstack.org/#/c/517444/. > - we're looking at switching some other CI jobs (maybe one to start) to > deploy a containerized undercloud, and then deploy an overcloud (probably > featureset010). We have a few blockers but we're working on it. It's mostly > the overcloud_prep that fails: > http://logs.openstack.org/06/542906/1/check/tripleo-ci- > centos-7-containers-multinode/0b18c49/logs/undercloud/home/ > zuul/overcloud_prep_containers.log.txt.gz#_2018-02-09_17_07_27 > - because of that effort, we're also taking an opportunity to refactor > tripleo-quickstart-extras roles to be more "standard" for both undercloud > and overcloud (example, renaming overcloud-prep-containers to > prep-containers, etc). We'll need help from TripleO CI squad probably. > - Chandan will help us to run validate-tempest in tripleo-ci-centos-7-undercloud-containers > so we can have some actual testing for this job, since no overcloud is > deployed. > > ## Feature parity > > - TLS work is ongoing. > - TripleO UI containerization is ongoing. > - Nova join support is targeted for Rocky > - Upgrade workflow is under investigation. We'll work on re-using the > upgrade_tasks in THT to upgrade a non-containerized undercloud (Queens) to > a containerized undercloud (Rocky) like we did between Ocata and Pike with > the upgrade_tasks. We'll actually re-use the same code but will have to > change the undercloud upgrade workflow in tripleoclient. > > That highlights the current efforts, if you have any question, need more > specific or just any feedback, please go ahead. > At the PTG, we'll discuss about some technical details and hope to move > forward with this nice feature during Rocky cycle. > > Thanks, > -- > Emilien Macchi > I forgot to mention, but people working on this topic have been using Trello to collaborate: https://trello.com/b/nmGSNPoQ/containerized-undercloud To keep things in the open, here's the link and anyone is free to participate. Thanks, -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From james.slagle at gmail.com Fri Feb 9 22:30:12 2018 From: james.slagle at gmail.com (James Slagle) Date: Fri, 9 Feb 2018 17:30:12 -0500 Subject: [openstack-dev] [tripleo] Updates on containerized undercloud In-Reply-To: References: Message-ID: On Fri, Feb 9, 2018 at 5:02 PM, Emilien Macchi wrote: > Quite a lot of progress has been made over the last months (and days), so I > found useful to share an update on where we are with the efforts on > containerized undercloud. > > ## CI efforts > > - tripleo-ci-centos-7-undercloud-containers job has been reworked to use the > "undercloud install" interface. Job is green and will probably start voting > during the next days. See https://review.openstack.org/#/c/517445/ and > https://review.openstack.org/#/c/517444/. > - we're looking at switching some other CI jobs (maybe one to start) to > deploy a containerized undercloud, and then deploy an overcloud (probably > featureset010). We have a few blockers but we're working on it. It's mostly > the overcloud_prep that fails: > http://logs.openstack.org/06/542906/1/check/tripleo-ci-centos-7-containers-multinode/0b18c49/logs/undercloud/home/zuul/overcloud_prep_containers.log.txt.gz#_2018-02-09_17_07_27 > - because of that effort, we're also taking an opportunity to refactor > tripleo-quickstart-extras roles to be more "standard" for both undercloud > and overcloud (example, renaming overcloud-prep-containers to > prep-containers, etc). We'll need help from TripleO CI squad probably. > - Chandan will help us to run validate-tempest in > tripleo-ci-centos-7-undercloud-containers so we can have some actual testing > for this job, since no overcloud is deployed. > > ## Feature parity > > - TLS work is ongoing. > - TripleO UI containerization is ongoing. > - Nova join support is targeted for Rocky > - Upgrade workflow is under investigation. We'll work on re-using the > upgrade_tasks in THT to upgrade a non-containerized undercloud (Queens) to a > containerized undercloud (Rocky) like we did between Ocata and Pike with the > upgrade_tasks. We'll actually re-use the same code but will have to change > the undercloud upgrade workflow in tripleoclient. You may want to add an item for the routed ctlplane work that landed at the end of Queens. Afaik, that will need to be supported with the containerized undercloud. -- -- James Slagle -- From emilien at redhat.com Fri Feb 9 22:39:08 2018 From: emilien at redhat.com (Emilien Macchi) Date: Fri, 9 Feb 2018 14:39:08 -0800 Subject: [openstack-dev] [tripleo] Updates on containerized undercloud In-Reply-To: References: Message-ID: On Fri, Feb 9, 2018 at 2:30 PM, James Slagle wrote: [...] You may want to add an item for the routed ctlplane work that landed > at the end of Queens. Afaik, that will need to be supported with the > containerized undercloud. > Done: https://trello.com/c/kFtIkto1/17-routed-ctlplane-networking Thanks, -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From colleen at gazlene.net Fri Feb 9 22:51:42 2018 From: colleen at gazlene.net (Colleen Murphy) Date: Fri, 9 Feb 2018 23:51:42 +0100 Subject: [openstack-dev] [keystone] Keystone Team Update - Weeks of 29 January and 5 February 2018 Message-ID: # Keystone Team Update - Weeks of 29 January and 5 February 2018 It's been a busy couple of weeks and I missed the last update, here's an update for the last two weeks. ## News ### RC1 RC1 was cut today[1]. 
We expect to release an RC2 after branching since we have a translations patch and a couple of bugfixes that we hope to get in. ### PTG Planning We're finalizing topics to cover during the cross-project days of the PTG[2]. [1] https://review.openstack.org/#/c/542385/ [2] https://etherpad.openstack.org/p/baremetal-vm-rocky-ptg ## Open Specs Search query: https://goo.gl/pc8cCf We don't have any new specs proposed currently but Adrian has hinted that an interesting one is on its way[3]. [3] http://lists.openstack.org/pipermail/openstack-operators/2018-February/014852.html ## Recently Merged Changes Search query: https://goo.gl/hdD9Kw We merged 33 changes this last week, 76 since the last update newsletter. These included the last of our api-ref reorganization changes, cleanup of v2 cruft and documentation, and some major bugfixes. ## Changes that need Attention Search query: https://goo.gl/tW5PiH There are 25 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots. Please focus reviews on release-critical bugs. ## Milestone Outlook https://releases.openstack.org/queens/schedule.html We've released our RC1 and we are in hard string freeze. We have two more weeks to make another RC release. ## Shout-outs Thanks to our Outreachy intern Suramya for completing our api-ref reorganization! This was a big step in making our API reference more useable. Of course we have many more things for you to help us with ;) ## Help with this newsletter Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter From mike at openstack.org Fri Feb 9 23:29:49 2018 From: mike at openstack.org (Mike Perez) Date: Sat, 10 Feb 2018 10:29:49 +1100 Subject: [openstack-dev] Developer Mailing List Digest February 3-9th Message-ID: <20180209232949.GE14568@openstack.org> Please help shape the future of the Developer Mailing List Digest with this two question survey: https://openstackfoundation.formstack.com/forms/openstack_developer_digest_feedback Contribute to the Dev Digest by summarizing OpenStack Dev List threads: * https://etherpad.openstack.org/p/devdigest * http://lists.openstack.org/pipermail/openstack-dev/ * http://lists.openstack.org/pipermail/openstack-sigs HTML version: https://www.openstack.org/blog/?p=8287 Success Bot Says * stephenfin on #openstack-nova [0]: After 3 years and 7 (?) releases, encryption between nova's consoleproxy service and compute nodes is finally * possible ✌️ * AJaeger on #openstack-infra [1]: zuul and nodepool feature/zuulv3 branches have merged into master * ildikov on #openstack-nova [2]: OpenStack now supports to attach a Cinder volume to multiple VM instances managed by Nova. * mriedem on #openstack-nova [3]: osc-placement 1.0.0 released; you can now do things with resource providers/classes via OSC CLI now. * AJaeger on #openstack-infra [4]: All tox jobs have been converted to Zuul v3 native syntax, run-tox.sh is gone. * ttx on #openstack-dev [5]: All teams have at least one candidate for PTL for the Rocky cycle! Might be the first time. 
* Tell us yours in OpenStack IRC channels using the command "#success " * More: https://wiki.openstack.org/wiki/Successes [0] - http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2018-01-15.log.html [1] - http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2018-01-18.log.html [2] - http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2018-01-23.log.html [3] - http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2018-01-24.log.html [4] - http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2018-02-07.log.html [5] - http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2018-02-08.log.html Community Summaries =================== * Release countdown [0] * Nova placement resource provider update [1] * TC Report [2] * POST /api-sig/news [3] * Technical Committee Status Update [4] [0] - http://lists.openstack.org/pipermail/openstack-dev/2018-February/127120.html [1] - http://lists.openstack.org/pipermail/openstack-dev/2018-February/127203.html [2] - http://lists.openstack.org/pipermail/openstack-dev/2018-February/127012.html [3] - http://lists.openstack.org/pipermail/openstack-dev/2018-February/127140.html [4] - http://lists.openstack.org/pipermail/openstack-dev/2018-February/127192.html Dublin PTG Schedule is Up ========================= PTG schedule is available [0]. A lot of rooms are available Monday/Tuesday to discuss additional topics that take half a day and can be requested [1]. For small things (90 min discussions) we can book them dyncamically during the event with the new PTG bot features. Follow the thread for updates to the schedule [2]. [0] - https://www.openstack.org/ptg#tab_schedule [1] - https://etherpad.openstack.org/p/PTG-Dublin-missing-topics [2] - http://lists.openstack.org/pipermail/openstack-dev/2018-February/thread.html#126892 Full thread: http://lists.openstack.org/pipermail/openstack-dev/2018-February/thread.html#126892 Last Chance for PTG Dublin Tickets ================================== PTG tickets for Dublin were sold out this week, and the Foundation received many requests for more tickets. Working with the venue to accommodate the extra capacity, every additional attendee incrementally increases costs to $600. It's understood the importance of this event and the need to have key team members present, so the OpenStack Foundation has negotiated an additional 100 tickets and will partially subsidize to be at sold at $400 [0]. [0] - https://www.eventbrite.com/e/project-teams-gathering-dublin-2018-tickets-39055825024 Full message: http://lists.openstack.org/pipermail/openstack-dev/2018-February/127129.html New Zuul Depends-On Syntax ======================== Recently introduced url-based syntax for Depends-On: footer in your commit message: Depends-On: https://review.openstack.org/535851 Old syntax will continue to work for a while, but please begin using the new syntax. Zuul has grown the ability to talk to multiple backend systems (Gerrit, Git and plain Git so far). From a change in gerrit you could have: Depends-On: https://github.com/ikalnytskyi/sphinxcontrib-openapi/pull/17 Or from a Github pull request: Depends-On: https://review.openstack.org/536159 Tips and certain cases contained further in the full message. 
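To make the new syntax concrete, a commit message footer might end up looking like this (the subject line and Change-Id are illustrative; the Depends-On URLs are the ones quoted above):

    Add frobnicator support to the widget driver

    This change cannot merge until its dependencies do.

    Change-Id: I0123456789abcdef0123456789abcdef01234567
    Depends-On: https://review.openstack.org/535851
    Depends-On: https://github.com/ikalnytskyi/sphinxcontrib-openapi/pull/17

Zuul tests the change with the listed dependencies applied and will not let it merge ahead of them, just as with the old change-id based form.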
Full message: http://lists.openstack.org/pipermail/openstack-dev/2018-January/126535.html Call For Mentors and Funding ============================ The Outreachy program [0] helps people of underrepresented groups get involved in free and open source software by matching interns with established mentors in the upstream community. OpenStack will be participating in Outreachy May 2018 to August 2018. Application period opens on February 12th. Interested mentors should publish their project ideas [1]. You can read more information about being a mentor [2]. Interested sponsors [3] can help provide a stipend to interns for a three month program. [0] - https://wiki.openstack.org/wiki/Outreachy [1] - https://www.outreachy.org/communities/cfp/openstack/submit-project/ [2] - https://wiki.openstack.org/wiki/Outreachy/Mentors [3] - https://www.outreachy.org/sponsor/ Full message: http://lists.openstack.org/pipermail/openstack-dev/2018-February/127009.html Community Goals for Rocky ========================= TC voted by not approved yet: * Remove mox [0] * Toggle the debug option at runtime [1] Comment now on the two selected goals, or the TC will approve them and they'll be discussed at the PTG. [0] - https://review.openstack.org/#/c/532361/ [1] - https://review.openstack.org/#/c/534605/ Full message: http://lists.openstack.org/pipermail/openstack-dev/2018-February/127017.html End of PTL Nominations ====================== Official candidate list available [0]. There are 0 projects without candidates, so the TC will not have to appoint an PTL's. Three projects will have elections: Kolla, QA and Mistral. [0] - http://governance.openstack.org/election/#Rocky-ptl-candidates Full message: http://lists.openstack.org/pipermail/openstack-dev/2018-February/127098.html -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From giuseppe.decandia at gmail.com Fri Feb 9 23:51:46 2018 From: giuseppe.decandia at gmail.com (Pino de Candia) Date: Fri, 9 Feb 2018 17:51:46 -0600 Subject: [openstack-dev] [security] Security PTG Planning, x-project request for topics. In-Reply-To: References: Message-ID: Hi Folks, here are the slides for the Tatu presentation: https://docs.google.com/presentation/d/1HI5RR3SNUu1If-A5Zi4EMvjl-3TKsBW20xEUyYHapfM I meant to record the demo video as well but I haven't gotten around to editing all the bits. Please stay tuned. thanks, Pino On Tue, Feb 6, 2018 at 10:52 AM, Giuseppe de Candia < giuseppe.decandia at gmail.com> wrote: > Hi Luke, > > Fantastic! An hour would be great if the schedule allows - there are lots > of different aspects we can dive into and potential future directions the > project can take. > > thanks! > Pino > > > > On Tue, Feb 6, 2018 at 10:36 AM, Luke Hinds wrote: > >> >> >> On Tue, Feb 6, 2018 at 4:21 PM, Giuseppe de Candia < >> giuseppe.decandia at gmail.com> wrote: >> >>> Hi Folks, >>> >>> I know the request is very late, but I wasn't aware of this SIG until >>> recently. Would it be possible to present a new project to the Security SIG >>> at the PTG? I need about 30 minutes. I'm hoping to drum up interest in the >>> project, sign on users and contributors and get feedback. >>> >>> For the past few months I have been working on a new project - Tatu [1]- >>> to automate the management of SSH certificates (for both users and hosts) >>> in OpenStack. 
Tatu allows users to generate SSH certificates with >>> principals based on their Project role assignments, and VMs automatically >>> set up their SSH host certificate (and related config) via Nova vendor >>> data. The project also manages bastions and DNS entries so that users don't >>> have to assign Floating IPs for SSH nor remember IP addresses. >>> >>> I have a working demo (including Horizon panels [2] and OpenStack CLI >>> [3]), but am still working on the devstack script and patches [4] to get >>> Tatu's repositories into OpenStack's GitHub and Gerrit. I'll try to post a >>> demo video in the next few days. >>> >>> best regards, >>> Pino >>> >>> >>> References: >>> >>> 1. https://github.com/pinodeca/tatu (Please note this is still very >>> much a work in progress, lots of TODOs in the code, very little testing and >>> documentation doesn't reflect the latest design). >>> 2. https://github.com/pinodeca/tatu-dashboard >>> 3. https://github.com/pinodeca/python-tatuclient >>> 4. https://review.openstack.org/#/q/tatu >>> >>> >>> >>> >> Hi Giuseppe, of course you can! I will add you to the agenda. We could >> get your an hour if it allows more time for presenting and post discussion? >> >> We will be meeting in an allocated room on Monday (details to follow). >> >> https://etherpad.openstack.org/p/security-ptg-rocky >> >> Luke >> >> >> >> >>> >>> >>> On Wed, Jan 31, 2018 at 12:03 PM, Luke Hinds wrote: >>> >>>> >>>> On Mon, Jan 29, 2018 at 2:29 PM, Adam Young wrote: >>>> >>>>> Bug 968696 and System Roles. Needs to be addressed across the >>>>> Service catalog. >>>>> >>>> >>>> Thanks Adam, will add it to the list. I see it's been open since 2012! >>>> >>>> >>>>> >>>>> On Mon, Jan 29, 2018 at 7:38 AM, Luke Hinds wrote: >>>>> >>>>>> Just a reminder as we have not had many uptakes yet.. >>>>>> >>>>>> Are there any projects (new and old) that would like to make use of >>>>>> the security SIG for either gaining another perspective on security >>>>>> challenges / blueprints etc or for help gaining some cross project >>>>>> collaboration? >>>>>> >>>>>> On Thu, Jan 11, 2018 at 3:33 PM, Luke Hinds >>>>>> wrote: >>>>>> >>>>>>> Hello All, >>>>>>> >>>>>>> I am seeking topics for the PTG from all projects, as this will be >>>>>>> where we try out are new form of being a SIG. >>>>>>> >>>>>>> For this PTG, we hope to facilitate more cross project collaboration >>>>>>> topics now that we are a SIG, so if your project has a security need / >>>>>>> problem / proposal than please do use the security SIG room where a larger >>>>>>> audience may be present to help solve problems and gain x-project consensus. >>>>>>> >>>>>>> Please see our PTG planning pad [0] where I encourage you to add to >>>>>>> the topics. 
>>>>>>> >>>>>>> [0] https://etherpad.openstack.org/p/security-ptg-rocky >>>>>>> >>>>>>> -- >>>>>>> Luke Hinds >>>>>>> Security Project PTL >>>>>>> >>>>>> >>>>>> >>>>>> ____________________________________________________________ >>>>>> ______________ >>>>>> OpenStack Development Mailing List (not for usage questions) >>>>>> Unsubscribe: OpenStack-dev-request at lists.op >>>>>> enstack.org?subject:unsubscribe >>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>>> >>>>>> >>>>> >>>>> ____________________________________________________________ >>>>> ______________ >>>>> OpenStack Development Mailing List (not for usage questions) >>>>> Unsubscribe: OpenStack-dev-request at lists.op >>>>> enstack.org?subject:unsubscribe >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>> >>>>> >>>> >>>> >>>> -- >>>> Luke Hinds | NFV Partner Engineering | CTO Office | Red Hat >>>> e: lhinds at redhat.com | irc: lhinds @freenode | t: +44 12 52 36 2483 >>>> >>>> ____________________________________________________________ >>>> ______________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: OpenStack-dev-request at lists.op >>>> enstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>>> >>> >>> ____________________________________________________________ >>> ______________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.op >>> enstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >> >> >> -- >> Luke Hinds | NFV Partner Engineering | CTO Office | Red Hat >> e: lhinds at redhat.com | irc: lhinds @freenode | t: +44 12 52 36 2483 >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From no-reply at openstack.org Sat Feb 10 05:56:28 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Sat, 10 Feb 2018 05:56:28 -0000 Subject: [openstack-dev] [heat] heat 10.0.0.0rc1 (queens) Message-ID: Hello everyone, A new release candidate for heat for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/heat/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/heat/log/?h=stable/queens Release notes for heat can be found at: http://docs.openstack.org/releasenotes/heat/ From lbragstad at gmail.com Sat Feb 10 16:07:33 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Sat, 10 Feb 2018 10:07:33 -0600 Subject: [openstack-dev] [keystone] Keystone Team Update - Weeks of 29 January and 5 February 2018 In-Reply-To: References: Message-ID: On Fri, Feb 9, 2018 at 4:51 PM, Colleen Murphy wrote: > # Keystone Team Update - Weeks of 29 January and 5 February 2018 > > It's been a busy couple of weeks and I missed the last update, here's > an update for the last two weeks. > > ## News > > ### RC1 > > RC1 was cut today[1]. We expect to release an RC2 after branching > since we have a translations patch and a couple of bugfixes that we > hope to get in. > > ### PTG Planning > > We're finalizing topics to cover during the cross-project days of the > PTG[2]. 
> > [1] https://review.openstack.org/#/c/542385/ > [2] https://etherpad.openstack.org/p/baremetal-vm-rocky-ptg > > ## Open Specs > > Search query: https://goo.gl/pc8cCf > > We don't have any new specs proposed currently but Adrian has hinted > that an interesting one is on its way[3]. > > [3] http://lists.openstack.org/pipermail/openstack-operators/ > 2018-February/014852.html > > ## Recently Merged Changes > > Search query: https://goo.gl/hdD9Kw > > We merged 33 changes this last week, 76 since the last update > newsletter. These included the last of our api-ref reorganization > changes, cleanup of v2 cruft and documentation, and some major > bugfixes. > > ## Changes that need Attention > > Search query: https://goo.gl/tW5PiH > > There are 25 changes that are passing CI, not in merge conflict, have > no negative reviews and aren't proposed by bots. Please focus reviews > on release-critical bugs. > > ## Milestone Outlook > > https://releases.openstack.org/queens/schedule.html > > We've released our RC1 and we are in hard string freeze. We have two > more weeks to make another RC release. > > ## Shout-outs > > Thanks to our Outreachy intern Suramya for completing our api-ref > reorganization! This was a big step in making our API reference more > useable. Of course we have many more things for you to help us with ;) > ++ This was a big undertaking and it's awesome to see it completed. Thanks, Suramya! > > ## Help with this newsletter > > Help contribute to this newsletter by editing the etherpad: > https://etherpad.openstack.org/p/keystone-team-newsletter > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From davanum at gmail.com Sat Feb 10 17:06:46 2018 From: davanum at gmail.com (Davanum Srinivas) Date: Sat, 10 Feb 2018 12:06:46 -0500 Subject: [openstack-dev] [keystone] Keystone Team Update - Weeks of 29 January and 5 February 2018 In-Reply-To: References: Message-ID: Very cool Suramya! On Sat, Feb 10, 2018 at 11:07 AM, Lance Bragstad wrote: > > > On Fri, Feb 9, 2018 at 4:51 PM, Colleen Murphy wrote: >> >> # Keystone Team Update - Weeks of 29 January and 5 February 2018 >> >> It's been a busy couple of weeks and I missed the last update, here's >> an update for the last two weeks. >> >> ## News >> >> ### RC1 >> >> RC1 was cut today[1]. We expect to release an RC2 after branching >> since we have a translations patch and a couple of bugfixes that we >> hope to get in. >> >> ### PTG Planning >> >> We're finalizing topics to cover during the cross-project days of the >> PTG[2]. >> >> [1] https://review.openstack.org/#/c/542385/ >> [2] https://etherpad.openstack.org/p/baremetal-vm-rocky-ptg >> >> ## Open Specs >> >> Search query: https://goo.gl/pc8cCf >> >> We don't have any new specs proposed currently but Adrian has hinted >> that an interesting one is on its way[3]. >> >> [3] >> http://lists.openstack.org/pipermail/openstack-operators/2018-February/014852.html >> >> ## Recently Merged Changes >> >> Search query: https://goo.gl/hdD9Kw >> >> We merged 33 changes this last week, 76 since the last update >> newsletter. These included the last of our api-ref reorganization >> changes, cleanup of v2 cruft and documentation, and some major >> bugfixes. 
>> >> ## Changes that need Attention >> >> Search query: https://goo.gl/tW5PiH >> >> There are 25 changes that are passing CI, not in merge conflict, have >> no negative reviews and aren't proposed by bots. Please focus reviews >> on release-critical bugs. >> >> ## Milestone Outlook >> >> https://releases.openstack.org/queens/schedule.html >> >> We've released our RC1 and we are in hard string freeze. We have two >> more weeks to make another RC release. >> >> ## Shout-outs >> >> Thanks to our Outreachy intern Suramya for completing our api-ref >> reorganization! This was a big step in making our API reference more >> useable. Of course we have many more things for you to help us with ;) > > > ++ > > This was a big undertaking and it's awesome to see it completed. Thanks, > Suramya! > >> >> >> ## Help with this newsletter >> >> Help contribute to this newsletter by editing the etherpad: >> https://etherpad.openstack.org/p/keystone-team-newsletter >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Davanum Srinivas :: https://twitter.com/dims From soulxu at gmail.com Sun Feb 11 02:48:51 2018 From: soulxu at gmail.com (Alex Xu) Date: Sun, 11 Feb 2018 10:48:51 +0800 Subject: [openstack-dev] [nova] Adding Takashi Natsume to python-novaclient core In-Reply-To: References: Message-ID: +1 2018-02-09 23:01 GMT+08:00 Matt Riedemann : > I'd like to add Takashi to the python-novaclient core team. > > python-novaclient doesn't get a ton of activity or review, but Takashi has > been a solid reviewer and contributor to that project for quite awhile now: > > http://stackalytics.com/report/contribution/python-novaclient/180 > > He's always fast to get new changes up for microversion support and help > review others that are there to keep moving changes forward. > > So unless there are objections, I'll plan on adding Takashi to the > python-novaclient-core group next week. > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eyalb1 at gmail.com Sun Feb 11 11:55:18 2018 From: eyalb1 at gmail.com (Eyal B) Date: Sun, 11 Feb 2018 13:55:18 +0200 Subject: [openstack-dev] [OpenStack][Vitrage] .success error on vitrage-dashboard In-Reply-To: <01a701d3a19e$209a1570$61ce4050$@ssu.ac.kr> References: <01a701d3a19e$209a1570$61ce4050$@ssu.ac.kr> Message-ID: Hi, Yes this is a bug due to the upgrade of angular the function that was deprecated in now removed We will push a fix Thanks Eyal On 9 February 2018 at 14:04, MinWookKim wrote: > Hello Vitrage. > > I installed the vitrage and vitrage-dashboard master versions and tested > them. 
> > However, an unrecognized error ('.success () is not function') occurs and > all panels of the vitrage-dashboard do not appear normally. > > I can not figure out the cause, but I changed the .success and .error of > each function to .then and .catch in dashboard / static / dashboard / > projct / services / vitrage_topology.service.js. > > As a result of this, I have confirmed the normal operation of the > vitrage-dashboard panel. > > What is the cause? > > Thanks J > > > Best Regards, > > > Minwook. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eyalb1 at gmail.com Sun Feb 11 14:17:45 2018 From: eyalb1 at gmail.com (Eyal B) Date: Sun, 11 Feb 2018 16:17:45 +0200 Subject: [openstack-dev] [OpenStack][Vitrage] .success error on vitrage-dashboard Message-ID: Hi, The bug was fixed both in queens and in master https://review.openstack.org/#/c/543224/ https://review.openstack.org/#/c/543223/ Thanks Eyal On 11 February 2018 at 13:55, Eyal B wrote: > Hi, > > Yes this is a bug due to the upgrade of angular the function that was > deprecated in now removed > We will push a fix > > Thanks > Eyal > > On 9 February 2018 at 14:04, MinWookKim wrote: > >> Hello Vitrage. >> >> I installed the vitrage and vitrage-dashboard master versions and tested >> them. >> >> However, an unrecognized error ('.success () is not function') occurs and >> all panels of the vitrage-dashboard do not appear normally. >> >> I can not figure out the cause, but I changed the .success and .error of >> each function to .then and .catch in dashboard / static / dashboard / >> projct / services / vitrage_topology.service.js. >> >> As a result of this, I have confirmed the normal operation of the >> vitrage-dashboard panel. >> >> What is the cause? >> >> Thanks J >> >> >> Best Regards, >> >> >> Minwook. >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From giuseppe.decandia at gmail.com Sun Feb 11 16:01:48 2018 From: giuseppe.decandia at gmail.com (Pino de Candia) Date: Sun, 11 Feb 2018 10:01:48 -0600 Subject: [openstack-dev] [security] Security PTG Planning, x-project request for topics. In-Reply-To: References: Message-ID: I uploaded the demo video (https://youtu.be/y6ICCPO08d8) and linked it from the slides. On Fri, Feb 9, 2018 at 5:51 PM, Pino de Candia wrote: > Hi Folks, > > here are the slides for the Tatu presentation: https://docs. > google.com/presentation/d/1HI5RR3SNUu1If-A5Zi4EMvjl-3TKsBW20xEUyYHapfM > > I meant to record the demo video as well but I haven't gotten around to > editing all the bits. Please stay tuned. > > thanks, > Pino > > > On Tue, Feb 6, 2018 at 10:52 AM, Giuseppe de Candia < > giuseppe.decandia at gmail.com> wrote: > >> Hi Luke, >> >> Fantastic! An hour would be great if the schedule allows - there are lots >> of different aspects we can dive into and potential future directions the >> project can take. >> >> thanks! 
>> Pino >> >> >> >> On Tue, Feb 6, 2018 at 10:36 AM, Luke Hinds wrote: >> >>> >>> >>> On Tue, Feb 6, 2018 at 4:21 PM, Giuseppe de Candia < >>> giuseppe.decandia at gmail.com> wrote: >>> >>>> Hi Folks, >>>> >>>> I know the request is very late, but I wasn't aware of this SIG until >>>> recently. Would it be possible to present a new project to the Security SIG >>>> at the PTG? I need about 30 minutes. I'm hoping to drum up interest in the >>>> project, sign on users and contributors and get feedback. >>>> >>>> For the past few months I have been working on a new project - Tatu >>>> [1]- to automate the management of SSH certificates (for both users and >>>> hosts) in OpenStack. Tatu allows users to generate SSH certificates with >>>> principals based on their Project role assignments, and VMs automatically >>>> set up their SSH host certificate (and related config) via Nova vendor >>>> data. The project also manages bastions and DNS entries so that users don't >>>> have to assign Floating IPs for SSH nor remember IP addresses. >>>> >>>> I have a working demo (including Horizon panels [2] and OpenStack CLI >>>> [3]), but am still working on the devstack script and patches [4] to get >>>> Tatu's repositories into OpenStack's GitHub and Gerrit. I'll try to post a >>>> demo video in the next few days. >>>> >>>> best regards, >>>> Pino >>>> >>>> >>>> References: >>>> >>>> 1. https://github.com/pinodeca/tatu (Please note this is still very >>>> much a work in progress, lots of TODOs in the code, very little testing and >>>> documentation doesn't reflect the latest design). >>>> 2. https://github.com/pinodeca/tatu-dashboard >>>> 3. https://github.com/pinodeca/python-tatuclient >>>> 4. https://review.openstack.org/#/q/tatu >>>> >>>> >>>> >>>> >>> Hi Giuseppe, of course you can! I will add you to the agenda. We could >>> get your an hour if it allows more time for presenting and post discussion? >>> >>> We will be meeting in an allocated room on Monday (details to follow). >>> >>> https://etherpad.openstack.org/p/security-ptg-rocky >>> >>> Luke >>> >>> >>> >>> >>>> >>>> >>>> On Wed, Jan 31, 2018 at 12:03 PM, Luke Hinds wrote: >>>> >>>>> >>>>> On Mon, Jan 29, 2018 at 2:29 PM, Adam Young wrote: >>>>> >>>>>> Bug 968696 and System Roles. Needs to be addressed across the >>>>>> Service catalog. >>>>>> >>>>> >>>>> Thanks Adam, will add it to the list. I see it's been open since 2012! >>>>> >>>>> >>>>>> >>>>>> On Mon, Jan 29, 2018 at 7:38 AM, Luke Hinds >>>>>> wrote: >>>>>> >>>>>>> Just a reminder as we have not had many uptakes yet.. >>>>>>> >>>>>>> Are there any projects (new and old) that would like to make use of >>>>>>> the security SIG for either gaining another perspective on security >>>>>>> challenges / blueprints etc or for help gaining some cross project >>>>>>> collaboration? >>>>>>> >>>>>>> On Thu, Jan 11, 2018 at 3:33 PM, Luke Hinds >>>>>>> wrote: >>>>>>> >>>>>>>> Hello All, >>>>>>>> >>>>>>>> I am seeking topics for the PTG from all projects, as this will be >>>>>>>> where we try out are new form of being a SIG. >>>>>>>> >>>>>>>> For this PTG, we hope to facilitate more cross project >>>>>>>> collaboration topics now that we are a SIG, so if your project has a >>>>>>>> security need / problem / proposal than please do use the security SIG room >>>>>>>> where a larger audience may be present to help solve problems and gain >>>>>>>> x-project consensus. >>>>>>>> >>>>>>>> Please see our PTG planning pad [0] where I encourage you to add to >>>>>>>> the topics. 
>>>>>>>> >>>>>>>> [0] https://etherpad.openstack.org/p/security-ptg-rocky >>>>>>>> >>>>>>>> -- >>>>>>>> Luke Hinds >>>>>>>> Security Project PTL >>>>>>>> >>>>>>> >>>>>>> >>>>>>> ____________________________________________________________ >>>>>>> ______________ >>>>>>> OpenStack Development Mailing List (not for usage questions) >>>>>>> Unsubscribe: OpenStack-dev-request at lists.op >>>>>>> enstack.org?subject:unsubscribe >>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>>>> >>>>>>> >>>>>> >>>>>> ____________________________________________________________ >>>>>> ______________ >>>>>> OpenStack Development Mailing List (not for usage questions) >>>>>> Unsubscribe: OpenStack-dev-request at lists.op >>>>>> enstack.org?subject:unsubscribe >>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>>> >>>>>> >>>>> >>>>> >>>>> -- >>>>> Luke Hinds | NFV Partner Engineering | CTO Office | Red Hat >>>>> e: lhinds at redhat.com | irc: lhinds @freenode | t: +44 12 52 36 2483 >>>>> >>>>> ____________________________________________________________ >>>>> ______________ >>>>> OpenStack Development Mailing List (not for usage questions) >>>>> Unsubscribe: OpenStack-dev-request at lists.op >>>>> enstack.org?subject:unsubscribe >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>> >>>>> >>>> >>>> ____________________________________________________________ >>>> ______________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: OpenStack-dev-request at lists.op >>>> enstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>>> >>> >>> >>> -- >>> Luke Hinds | NFV Partner Engineering | CTO Office | Red Hat >>> e: lhinds at redhat.com | irc: lhinds @freenode | t: +44 12 52 36 2483 >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From giuseppe.decandia at gmail.com Sun Feb 11 16:57:10 2018 From: giuseppe.decandia at gmail.com (Pino de Candia) Date: Sun, 11 Feb 2018 10:57:10 -0600 Subject: [openstack-dev] [infra] Please add me to Tatu's Gerrit groups In-Reply-To: <20180209192126.6hzzwqha36eda64l@yuggoth.org> References: <20180209192126.6hzzwqha36eda64l@yuggoth.org> Message-ID: Thanks! On Fri, Feb 9, 2018 at 1:21 PM, Jeremy Stanley wrote: > On 2018-02-09 10:00:25 -0600 (-0600), Pino de Candia wrote: > > I'd like to be added to the recently created tatu-core and > > tatu-release Gerrit groups. > > Since your Gerrit account is the one which proposed the change to > add the project whose ACLs use those groups, I have added you as the > initial member in both of them. > -- > Jeremy Stanley > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Sun Feb 11 21:45:09 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Sun, 11 Feb 2018 15:45:09 -0600 Subject: [openstack-dev] [nova] Regression bug for boot from volume with IsolatedHostsFilter Message-ID: I triaged this bug a couple of weeks ago: https://bugs.launchpad.net/nova/+bug/1746483 It looks like it's been regressed since Mitaka when that filter started using the RequestSpec object rather than legacy filter_properties dict. 
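(For context, the filter is opt-in; a deployment that uses it has something along these lines in nova.conf, with purely illustrative values:

    [filter_scheduler]
    enabled_filters = RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,IsolatedHostsFilter
    isolated_hosts = isolated-host-1.example.com
    isolated_images = 342b492c-128f-4a42-8d3a-c5088cf27d13
    restrict_isolated_hosts_to_isolated_images = true

so only operators who explicitly enabled it are affected.)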
Looking a bit deeper though, it looks like this filter never worked for volume-backed instances. That's because this code, called from the compute API, never takes the image_id out of the volumes "volume_image_metadata": https://github.com/openstack/nova/blob/fa6c0f9cb14f1b4ce4d9b1dbacb1743173089986/nova/utils.py#L1032 So before the regression that breaks the filter, the filter just never got the image.id to validate and accepted whatever host for that instance since it didn't know the image to tell if it was isolated or not. I've got a functional recreate test for the bug and I think it's a pretty easy fix, but a question comes up about backports, which is - do we do two fixes for this bug, one to backport to stable which is just handling the missing RequestSpec.image.id attribute in the filter so the filter doesn't explode? Then we do another fix which actually pulls the image_id off the volume_image_metadata and put that properly into the RequestSpec so the filter actually _works_ with volume-backed instances? That would technically be a change in behavior for the filter, albeit likely the correct thing to do all along but we just never did it, and apparently no one ever noticed or cared (it's not a default enabled filter after all). -- Thanks, Matt From adriant at catalyst.net.nz Sun Feb 11 23:37:17 2018 From: adriant at catalyst.net.nz (Adrian Turjak) Date: Mon, 12 Feb 2018 12:37:17 +1300 Subject: [openstack-dev] [Openstack-operators] [publiccloud-wg][keystone][Horizon] Multi-Factor Auth in OpenStack In-Reply-To: <9bc0878f-284f-c6f0-2999-8c53dc6f183e@gmail.com> References: <5814be16-85f7-0eb9-f694-e9280b617d04@catalyst.net.nz> <9bc0878f-284f-c6f0-2999-8c53dc6f183e@gmail.com> Message-ID: <0c5b5347-bf91-e642-862e-34ffc3d91dc4@catalyst.net.nz> On 09/02/18 15:50, Lance Bragstad wrote: > On 02/08/2018 03:36 PM, Adrian Turjak wrote: >> My plan for the Rocky cycle is to work in Keystone and address the missing pieces I need to get MFA working properly throughout OpenStack in an actually useful way, and I'll provide updates for that once I have the specs ready to submit (am waiting until start of Rocky for that). The good thing, is that this current solution for MFA works, and it can be migrated from to the methods I intend to work on for Rocky. The same credential models will be used in Keystone, and I will write tools to take users with TOTP credentials and configure auth rules for them for more official MFA support in Keystone once it is useful. > Are you planning to revive the previous proposal [0]? We should have > stable/queens branch by EOW, so Rocky development will be here soon. Are > you planning on attending the PTG? It might be valuable to discuss what > you have and how we can integrate it upstream. I thought I remember the > issue being policy related (where admins were required to update user > secrets and it wasn't necessarily a self-serving API). Now that we're in > a better place with system-scope, we might be able to move the ball > forward a bit regarding your use case. > > [0] https://review.openstack.org/#/c/345705/ So the use case is not just self-management, that's a part of it, but one at least we've solved outside of Keystone. The bigger issue is that MFA as we currently have it in Keystone is... unfinished and very hard to consume. And no I won't be coming to the PTG. :( The multi-auth-method approach is good, as are the per user auth rules, but right now nothing is consuming it using more than one method. In fact KeystoneAuth doesn't know how to deal with it. 
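(For reference, what keystone itself already accepts is a single v3 token request that lists several methods, roughly like this sketch, where the user ID, password and passcode are placeholders:

    POST /v3/auth/tokens
    {"auth": {"identity": {
        "methods": ["password", "totp"],
        "password": {"user": {"id": "USER_ID", "password": "SECRET"}},
        "totp": {"user": {"id": "USER_ID", "passcode": "123456"}}}}}

The gap is that no client tooling knows how to discover that a second method is still needed and then drive that interaction.)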
In part that is my fault since I put my hand up to make KeystoneAuth work with Multi-method auth, but... I gave up because it got ugly fast. We could make auth methods in KeystoneAuth that are made up of multiple methods, but then you need an explicit auth module for each combination... We need to refactor that code to allow you to specify a combination and have the code underneath do the right thing. The other issue is that you always need to know ahead of time how to auth for a given user and their specific auth rules, and you can't programmatically figure that out. The missing piece is something that allows us to programmatically know what is missing when 1 out of 2+ auth rules succeeds. When a user with more than 1 auth rule attempts to auth to Keystone, if they auth with 1 rule, but need 2 (password and totp), then the auth will fail and the error will be unhelpful. Even if the error was helpful, we can't rely on parsing error messages, that's unsafe. What should happen is Keystone acknowledges they were successful with one of their configured auth rules, at which point we know this user is 'probably' who they say they are. We now pass them a Partially Authed Token, which says they've already authed with 'password', but are missing 'totp' to complete their auth. The user can now return that token, and the missing totp auth method, and get back a full token. So the first spec I intend to propose is the Partially Authed Token type. Which solves the challenge response problem we have, and lets us actually know how to proceed when auth is unfinished. Once we have that, we can update KeystoneAuth, then the CLI to support challenge response, and then Horizon as well. Then we can look at user self management of MFA. Amusingly the very original spec that brought it multi-auth methods into Keystone talked about the need for a 'half-token': https://adam.younglogic.com/2012/10/multifactor-auth-and-keystone/ https://blueprints.launchpad.net/keystone/+spec/multi-factor-authn https://review.openstack.org/#/c/21487/ But the 'half-token' was never implemented. :( The MFA method in this original email was just... replace the password auth method with one that expects an appended totp passcode. It's simple, it doesn't break anything nor expect more than one auth method, it works with Horizon and the CLI because of that, but it doesn't do real challenge response. It's a stop gap measure for us since we're on an older version of Keystone, and because the current methods are too hard for our customers to actually consume. And most importantly, I can migrate users from using it to using user auth rules, since Keystone already stores the totp credential and all I need to then do is make auth rules for users with a totp cred. Hope that explains my plan, and why I'm going to be proposing it. It's going to be a lot of work. :P From tovin07 at gmail.com Mon Feb 12 02:14:51 2018 From: tovin07 at gmail.com (=?UTF-8?B?Tmd1eeG7hW4gVHLhu41uZyBWxKluaCAoVG92aW4gU2V2ZW4p?=) Date: Mon, 12 Feb 2018 02:14:51 +0000 Subject: [openstack-dev] [FFE][requirements][release][oslo] osprofiler bug fix needed Message-ID: Hello, Currently, Oslo release for Queens is out. However, OSProfiler faces an issue that make some Nova CLI command not working. 
Detail for this issue: https://launchpad.net/bugs/1743586 Patch that fixes this bug: https://review.openstack.org/#/c/535219/ Backport for this: https://review.openstack.org/#/c/537735/ Release new version for OSProfiler with this bug fix in Queens: https://review.openstack.org/#/c/541645/ Therefore, I am sending this email to get an FFE for it. Thank you! -- Best, Tovin -------- Nguyễn Trọng Vĩnh (Tovin Seven) Email: tovin07 at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From aaronzhu1121 at gmail.com Mon Feb 12 02:45:13 2018 From: aaronzhu1121 at gmail.com (Rong Zhu) Date: Mon, 12 Feb 2018 10:45:13 +0800 Subject: [openstack-dev] [murano] Next 2 team meetings canceled Message-ID: Hi Teams, Let's cancel the meetings on 13 Feb and 20 Feb because of the Chinese Spring Festival. Cheers, Rong Zhu From hongbin034 at gmail.com Mon Feb 12 04:18:10 2018 From: hongbin034 at gmail.com (Hongbin Lu) Date: Sun, 11 Feb 2018 23:18:10 -0500 Subject: [openstack-dev] [osc][python-openstackclient] Consistency of option name Message-ID: Hi all, I was working on the OSC plugin of my project and trying to choose a CLI option to represent the availability zone of the container. When I came across the existing commands, I saw some inconsistencies in the naming. Some commands use the syntax '--zone <zone>', while others use the syntax '--availability-zone <availability-zone>'. For example: * openstack host list ... [--zone <zone>] * openstack aggregate create ... [--zone <zone>] * openstack volume create ... [--availability-zone <availability-zone>] * openstack consistency group create ... [--availability-zone <availability-zone>] I wonder if it makes sense to address this inconsistency. Is it possible to have all commands use one syntax? Best regards, Hongbin -------------- next part -------------- An HTML attachment was scrubbed... URL: From skramaja at redhat.com Mon Feb 12 05:45:58 2018 From: skramaja at redhat.com (Saravanan KR) Date: Mon, 12 Feb 2018 11:15:58 +0530 Subject: [openstack-dev] [tripleo][nova][neutron] changing the default qemu group in tripleo Message-ID: Hello, With OvS 2.8, the USER and GROUP in which ovs will run have been changed to openvswitch:openvswitch (for regular ovs builds) and openvswitch:hugetlbfs (for DPDK-enabled ovs builds). Since the Fedora family always has DPDK-enabled builds, all TripleO deployments will have OvS running with openvswitch:hugetlbfs. For DPDK, qemu should also run with the same group "hugetlbfs" so that the vhost sockets can be shared between qemu and openvswitch. So we are making the change to set "group" in /etc/libvirt/qemu.conf to "hugetlbfs" for DPDK deployments. And it is all working fine. Now the question is - should we make qemu run with the same group for all the nodes of the deployment [or] only the nodes which have DPDK enabled? It is possible for the DPDK nodes to host non-DPDK VMs (like SR-IOV or regular tenant VMs). So all VMs will be running with the "qemu:hugetlbfs" user and group. So to avoid the conflict of running different groups on different roles of a TripleO deployment, I prefer to update the qemu group to "hugetlbfs" for all the nodes of all roles, if DPDK is enabled in the deployment. Let us know if you see any issues with this approach?
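For clarity, on the affected nodes the proposed change amounts to something like the following in /etc/libvirt/qemu.conf (a sketch; the user stays whatever the distribution already uses, typically qemu):

    # /etc/libvirt/qemu.conf
    user = "qemu"
    group = "hugetlbfs"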
Regards, Saravanan KR From pratapagoutham at gmail.com Mon Feb 12 06:44:06 2018 From: pratapagoutham at gmail.com (Goutham Pratapa) Date: Mon, 12 Feb 2018 12:14:06 +0530 Subject: [openstack-dev] [all][Kingbird][Heat][Glance]Multi-Region Orchestrator In-Reply-To: References: <2500e357-23a3-2d53-0b5c-591dbd0d4cbb@redhat.com> Message-ID: Hi, Zane, Sorry for the late reply I was on leave for a couple of days. Firstly, Thanks for the clear in detail analysis and suggestions on quotas and resources-management it really means a lot to us :). Secondly, these are the use-cases which kingbird is mainly developed for. *OUR USE-CASES QUOTA-MANAGEMENT:* 1. Admin must have a global view of all quotas to all tenants across all the regions 2. Admin can periodically balance the quotas (we have a formula using which we do this balancing ) across regions 3. Admin can update, Delete quotas for tenants 4. Admin can sync quotas for all tenants so that the quotas will be updated in all regions. *USE-CASES FOR RESOURCE-MANAGEMENT:* 1. Resources which are required to boot up a VM in One region should be accessible in other target-regions In the process, Kingbird has support for the following a) Sync/Replicate existing Nova-Keypairs b) Sync/Replicate existing Glance-Images c) Sync/Replicate existing Nova-Flavors.(Only admin can sync these.) 2. User who has a VM in one region should have the ease or possibility to have a replica of the same vm in target-region(s) a) It can be a snapshot of the already booted-up VM or with the same qcow2 image. *GENERIC USE-CASES* 1. Automation scripts for kingbird in -ansible, -salt -puppet. 2. Add SSL support to kingbird 3. Resource management in Kingbird-dashboard. 4. Kingbird in a docker 5. Add Kingbird into Kolla. On Fri, Feb 9, 2018 at 12:47 AM, Zane Bitter wrote: > On 07/02/18 12:24, Goutham Pratapa wrote: > >> >Yes as you said it can be interpreted as a tool that can >> orchestrate multiple-regions. >> > > Actually from your additional information I'm now getting the impression > that you are, in fact, positioning this as a partial competitor to Heat. >To some extent yes, Till now we have focused on resource-synchronization > and quota-balancing for various tenants across multiple-regions. But in the > coming cycle we want to enter the orchestration game. > > Just to be sure does openstack already has project which can >> replicate the resources and orchestrate??? >> > > OpenStack has an orchestration service - Heat - and it allows you to do > orchestration across multiple regions by creating a nested Stack in an > arbitrary region as a resource in a Heat Stack.[1] > > Heat includes the ability to create Nova keypairs[2] and even, for those > users with sufficient privileges, flavors[3] and quotas[4][5][6]. (It used > to be able to create Glance images as well, but this was deprecated because > it is not feasible using the Glance v2 API.) 
> > [1] https://docs.openstack.org/heat/latest/template_guide/openst > ack.html#OS::Heat::Stack > [2] https://docs.openstack.org/heat/latest/template_guide/openst > ack.html#OS::Nova::KeyPair > [3] https://docs.openstack.org/heat/latest/template_guide/openst > ack.html#OS::Nova::Flavor > [4] https://docs.openstack.org/heat/latest/template_guide/openst > ack.html#OS::Nova::Quota > [5] https://docs.openstack.org/heat/latest/template_guide/openst > ack.html#OS::Cinder::Quota > [6] https://docs.openstack.org/heat/latest/template_guide/openst > ack.html#OS::Neutron::Quota > > why because In coming >> cycle our idea is that a user just gives a VM-ID or Vm-name and we >> sync all the resources with which the vm is actually created >> ofcourse we cant have the same network in target-region so we may >> need the network-id or port-id from the target region from user so >> that kingbird will boot up the requested vm in the target region(s). >> > > So it sounds like you are starting from the premise that users will create > stuff in an ad-hoc way, then later discover that they need to replicate > their ad-hoc deployments to multiple regions, and you're building a tool to > do that. Heat, on the other hand, starts from the premise that users will > invest a little up-front effort to create a declarative definition of their > deployment, which they can then deploy repeatably in multiple (or the > same!) regions. Our experience is that people have shown themselves to be > quite willing to do this, because repeatable deployments have lots of > benefits. > Yes that is true. But, our idea is the same as what you have stated above > ` *So it sounds like you are starting from the premise that users will > create stuff in an ad-hoc way, then later discover that they need to > replicate their ad-hoc deployments to multiple regions *` to reduce the > repeatable deployments. > > Looking at the things you want to synchronise: > > * Quotas > > Synchronize after balancing quotas across regions. (our use-case is if an > admin user wants to know the global limit for a tenant across regions then > he can view, update and delete from one region using Kingbird.) > > Operators can already use Heat templates to manage these if they so desire. > > * Flavors > > Some clouds allow users to create flavors, and those users can use Heat > templates to manage them already. > > > > Operators can *not* use Heat templates to manage flavors in the same way > that that can with quotas, because the OS::Nova::Flavor resource was > designed with the above use-case in mind instead. (Specifically, it doesn't > allow you to set the name.) Support has been requested for it in the past, > however, and given the other kinds of admin-only resources we have in Heat > (Quotas, Keystone resources) it would be consistent to modify > OS::Nova::Flavor to allow this additional use case. > > Yes, it is true but we thought of handling these issues along with our > use-cases. > > It's possible that operators could benefit from better/other tooling for > Flavors and Quotas. In fact, the reason I've pushed back against some of > the admin-facing stuff in Heat is that it often seems to me that Heat is an > awkward tool for managing global-singleton or tenant-local-singleton > administrator resources. It's definitely fine for multiple tools to > co-exist, although a separate OpenStack service with an API seems like it > could be overkill to me. > > Our idea is the same `manage adminstrator resource` > > * Keypairs > > This is a non-issue IMHO. 
> > * Images
> >
> I agree with what I think Jay is suggesting here - not that there should
> be a single global Glance handling multiple regions (locality is important
> for images), but definitely some sort of multi-region support in Glance
> (e.g. a built-in way to automatically replicate an image to other regions)
> would be a better solution than an external service doing it. Glance is
> always looking for new contributors :)
>
> We definitely would love to try that and, if possible, contribute to Glance.
>
> Though I really think the problem here is that there aren't good ways to
> automate image upload in general with the Glance v2 API; the multiregion
> part is just a for-loop. Allowing Glance to download an image from a URL
> (or even if it were limited to Swift objects) instead of having to upload
> one to it would allow us to resurrect OS::Glance::Image in Heat.
>
> Kingbird does *not* download an image from a URL and then upload it to
> Glance; rather, it uses the existing image and replicates it into the
> other region.
>
> Kingbird can also sync VM snapshots (yet to be committed).
>
> https://github.com/openstack/kingbird/blob/master/kingbird/drivers/openstack/glance_v2.py#L149
>
> * Other user resources
>
> These are already handled, in a much more general way, by Heat.
>
> Honestly, it seems like a lot of wheels are being reinvented here. I think
> it would be more productive to start with a list of use cases and see
> whether the gaps can be covered by changes to existing services that they
> would consider in-scope.
>
> Kingbird does have many features like quota-management and
> resource-management, of which one is the Multi-region Orchestration.
>
> cheers,
> Zane.
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

We really thank you for all the suggestions; this definitely gives us a way forward. :)

--
Cheers !!!
Goutham Pratapa
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From shaohe.feng at intel.com Mon Feb 12 07:05:52 2018
From: shaohe.feng at intel.com (Feng, Shaohe)
Date: Mon, 12 Feb 2018 07:05:52 +0000
Subject: [openstack-dev] [cyborg][glance][nova]cyborg FPGA management flow disscusion.
Message-ID: <7B5303F69BB16B41BB853647B3E5BD7054026FB2@SHSMSX101.ccr.corp.intel.com>

Now I am working on an FPGA management POC with Dolpher. We have finished some code, and have had discussions with Li Liu and some Cyborg developers.
Here are some discussion points:

Image management
1. The user should upload the FPGA image to Glance and set the tags as follows. There are two suggestions for uploading an FPGA image.
   A. Use the raw Glance API, like:
      $ openstack image create --file mypath/FPGA.img fpga.img
      $ openstack image set --tag FPGA --property vendor=intel --property type=crypto 58b813db-1fb7-43ec-b85c-3b771c685d22
      The image must have the "FPGA" tag and an accelerator type (such as type=crypto).
   B. Cyborg supports a new API to upload an image. This API will wrap the Glance API and include the above steps, and also make an image record in its local DB.
2. The Cyborg agent/conductor gets the FPGA image info from Glance. There are also two suggestions for getting the FPGA image info.
   A. Use the raw Glance API. Cyborg will get the images by the FPGA tag and timestamp periodically and store them in its local cache.
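(As a rough illustration of option A, not from the original mail and assuming the stock OSC image commands, the periodic fetch could start from something like the following; the image tags show up in the --long output.)

$ openstack image list --tag FPGA --long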
Cyborg will then use the image tags and properties to form placement traits and resource_class names.
   B. Store the information when calling Cyborg's new upload API.
3. Image download: call the Glance image download API to a local file, and make a corresponding md5 file for checksums.
GAP in image management: missing related Glance image client in Cyborg.

Resource report management for the scheduler:
1. The Cyborg agent/conductor needs to synthesize all useful information from the FPGA driver and the image information.
The traits will be like: CUSTOM_FPGA, CUSTOM_ACCELERATOR_CRYPTO.
The resource_class will be like: CUSTOM_FPGA_INTEL_PF, CUSTOM_FPGA_INTEL_VF
{"inventories":
    "CUSTOM_FPGA_INTEL_PF": {
        "allocation_ratio": 1.0,
        "max_unit": 4,
        "min_unit": 1,
        "reserved": 0,
        "step_size": 1,
        "total": 4
    }
}

Accelerator claim and release:
1. Cyborg will support the related API for accelerator claim and release. It can pass the following parameters:
   nodename: the host the accelerator is located on; it is required.
   type: the accelerator type; Cyborg can get the image UUID from it. It is optional.
   image uuid: the UUID of the FPGA bitstream image. It is optional.
   traits: the traits info that Cyborg reports to placement.
   resource_class: the resource_class name that is reported to placement.
   It returns the address of the accelerator. At present, it is the PCIE_ADDRESS.
2. When claiming an accelerator, if type and image are None, Cyborg will not program the FPGA for the user.

FPGA accelerator program API:
We still need to support an independent program API for some specific scenarios. For example, as an FPGA developer, I change my Verilog logic frequently and need to do verification on my guest. I upload my new bitstream image to Glance, and call Cyborg to program my FPGA accelerator.

End user operations flow:
1. Upload a bitstream image to Glance if necessary and set its tags (at least FPGA is required) and properties, such as: --tag FPGA --property vendor=intel --property type=crypto
2. List the FPGA-related traits and resource_class names via the placement API, such as the "CUSTOM_FPGA_INTEL_PF" resource_class name and the "CUSTOM_HW_INTEL,CUSTOM_HW_CRYPTO" traits.
3. Create a new flavor with the expected traits and resource_class as extra specs, such as: "resources<n>:CUSTOM_FPGA_INTEL_PF=2" (n is an integer or an empty string) and "required:CUSTOM_HW_INTEL,CUSTOM_HW_CRYPTO".
4. Create the VM with this flavor.

BR
Shaohe Feng
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gkotton at vmware.com Mon Feb 12 07:56:56 2018
From: gkotton at vmware.com (Gary Kotton)
Date: Mon, 12 Feb 2018 07:56:56 +0000
Subject: [openstack-dev] [neutron][dynamic-routing] Broken unit tests
Message-ID: 

Hi,
Can cores please look at https://review.openstack.org/543208. We are currently blocked on master and stable/queens with this.
Thanks
Gary
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From lhinds at redhat.com Mon Feb 12 08:45:32 2018
From: lhinds at redhat.com (Luke Hinds)
Date: Mon, 12 Feb 2018 08:45:32 +0000
Subject: [openstack-dev] [security] Security PTG Planning, x-project request for topics.
In-Reply-To: 
References: 
Message-ID: 

On Sun, Feb 11, 2018 at 4:01 PM, Pino de Candia wrote:
> I uploaded the demo video (https://youtu.be/y6ICCPO08d8) and linked it
> from the slides.

Thanks Pino, I added these to the agenda: https://etherpad.openstack.org/p/security-ptg-rocky

Please let me know before the PTG, if it will be your colleague or if we need to find a projector to conference you in.
> On Fri, Feb 9, 2018 at 5:51 PM, Pino de Candia < > giuseppe.decandia at gmail.com> wrote: > >> Hi Folks, >> >> here are the slides for the Tatu presentation: https://docs.goo >> gle.com/presentation/d/1HI5RR3SNUu1If-A5Zi4EMvjl-3TKsBW20xEUyYHapfM >> >> I meant to record the demo video as well but I haven't gotten around to >> editing all the bits. Please stay tuned. >> >> thanks, >> Pino >> >> >> On Tue, Feb 6, 2018 at 10:52 AM, Giuseppe de Candia < >> giuseppe.decandia at gmail.com> wrote: >> >>> Hi Luke, >>> >>> Fantastic! An hour would be great if the schedule allows - there are >>> lots of different aspects we can dive into and potential future directions >>> the project can take. >>> >>> thanks! >>> Pino >>> >>> >>> >>> On Tue, Feb 6, 2018 at 10:36 AM, Luke Hinds wrote: >>> >>>> >>>> >>>> On Tue, Feb 6, 2018 at 4:21 PM, Giuseppe de Candia < >>>> giuseppe.decandia at gmail.com> wrote: >>>> >>>>> Hi Folks, >>>>> >>>>> I know the request is very late, but I wasn't aware of this SIG until >>>>> recently. Would it be possible to present a new project to the Security SIG >>>>> at the PTG? I need about 30 minutes. I'm hoping to drum up interest in the >>>>> project, sign on users and contributors and get feedback. >>>>> >>>>> For the past few months I have been working on a new project - Tatu >>>>> [1]- to automate the management of SSH certificates (for both users and >>>>> hosts) in OpenStack. Tatu allows users to generate SSH certificates with >>>>> principals based on their Project role assignments, and VMs automatically >>>>> set up their SSH host certificate (and related config) via Nova vendor >>>>> data. The project also manages bastions and DNS entries so that users don't >>>>> have to assign Floating IPs for SSH nor remember IP addresses. >>>>> >>>>> I have a working demo (including Horizon panels [2] and OpenStack CLI >>>>> [3]), but am still working on the devstack script and patches [4] to get >>>>> Tatu's repositories into OpenStack's GitHub and Gerrit. I'll try to post a >>>>> demo video in the next few days. >>>>> >>>>> best regards, >>>>> Pino >>>>> >>>>> >>>>> References: >>>>> >>>>> 1. https://github.com/pinodeca/tatu (Please note this is still >>>>> very much a work in progress, lots of TODOs in the code, very little >>>>> testing and documentation doesn't reflect the latest design). >>>>> 2. https://github.com/pinodeca/tatu-dashboard >>>>> 3. https://github.com/pinodeca/python-tatuclient >>>>> 4. https://review.openstack.org/#/q/tatu >>>>> >>>>> >>>>> >>>>> >>>> Hi Giuseppe, of course you can! I will add you to the agenda. We could >>>> get your an hour if it allows more time for presenting and post discussion? >>>> >>>> We will be meeting in an allocated room on Monday (details to follow). >>>> >>>> https://etherpad.openstack.org/p/security-ptg-rocky >>>> >>>> Luke >>>> >>>> >>>> >>>> >>>>> >>>>> >>>>> On Wed, Jan 31, 2018 at 12:03 PM, Luke Hinds >>>>> wrote: >>>>> >>>>>> >>>>>> On Mon, Jan 29, 2018 at 2:29 PM, Adam Young >>>>>> wrote: >>>>>> >>>>>>> Bug 968696 and System Roles. Needs to be addressed across the >>>>>>> Service catalog. >>>>>>> >>>>>> >>>>>> Thanks Adam, will add it to the list. I see it's been open since 2012! >>>>>> >>>>>> >>>>>>> >>>>>>> On Mon, Jan 29, 2018 at 7:38 AM, Luke Hinds >>>>>>> wrote: >>>>>>> >>>>>>>> Just a reminder as we have not had many uptakes yet.. 
>>>>>>>> >>>>>>>> Are there any projects (new and old) that would like to make use of >>>>>>>> the security SIG for either gaining another perspective on security >>>>>>>> challenges / blueprints etc or for help gaining some cross project >>>>>>>> collaboration? >>>>>>>> >>>>>>>> On Thu, Jan 11, 2018 at 3:33 PM, Luke Hinds >>>>>>>> wrote: >>>>>>>> >>>>>>>>> Hello All, >>>>>>>>> >>>>>>>>> I am seeking topics for the PTG from all projects, as this will be >>>>>>>>> where we try out are new form of being a SIG. >>>>>>>>> >>>>>>>>> For this PTG, we hope to facilitate more cross project >>>>>>>>> collaboration topics now that we are a SIG, so if your project has a >>>>>>>>> security need / problem / proposal than please do use the security SIG room >>>>>>>>> where a larger audience may be present to help solve problems and gain >>>>>>>>> x-project consensus. >>>>>>>>> >>>>>>>>> Please see our PTG planning pad [0] where I encourage you to add >>>>>>>>> to the topics. >>>>>>>>> >>>>>>>>> [0] https://etherpad.openstack.org/p/security-ptg-rocky >>>>>>>>> >>>>>>>>> -- >>>>>>>>> Luke Hinds >>>>>>>>> Security Project PTL >>>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> ____________________________________________________________ >>>>>>>> ______________ >>>>>>>> OpenStack Development Mailing List (not for usage questions) >>>>>>>> Unsubscribe: OpenStack-dev-request at lists.op >>>>>>>> enstack.org?subject:unsubscribe >>>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>>>>> >>>>>>>> >>>>>>> >>>>>>> ____________________________________________________________ >>>>>>> ______________ >>>>>>> OpenStack Development Mailing List (not for usage questions) >>>>>>> Unsubscribe: OpenStack-dev-request at lists.op >>>>>>> enstack.org?subject:unsubscribe >>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>>>> >>>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> Luke Hinds | NFV Partner Engineering | CTO Office | Red Hat >>>>>> e: lhinds at redhat.com | irc: lhinds @freenode | t: +44 12 52 36 2483 >>>>>> >>>>>> ____________________________________________________________ >>>>>> ______________ >>>>>> OpenStack Development Mailing List (not for usage questions) >>>>>> Unsubscribe: OpenStack-dev-request at lists.op >>>>>> enstack.org?subject:unsubscribe >>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>>> >>>>>> >>>>> >>>>> ____________________________________________________________ >>>>> ______________ >>>>> OpenStack Development Mailing List (not for usage questions) >>>>> Unsubscribe: OpenStack-dev-request at lists.op >>>>> enstack.org?subject:unsubscribe >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>> >>>>> >>>> >>>> >>>> -- >>>> Luke Hinds | NFV Partner Engineering | CTO Office | Red Hat >>>> e: lhinds at redhat.com | irc: lhinds @freenode | t: +44 12 52 36 2483 >>>> >>> >>> >> > -- Luke Hinds | NFV Partner Engineering | CTO Office | Red Hat e: lhinds at redhat.com | irc: lhinds @freenode | t: +44 12 52 36 2483 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From renat.akhmerov at gmail.com Mon Feb 12 09:44:26 2018 From: renat.akhmerov at gmail.com (Renat Akhmerov) Date: Mon, 12 Feb 2018 16:44:26 +0700 Subject: [openstack-dev] [mistral] PTG planning on Friday =?utf-8?Q?=E2=80=9Coffice_hours=E2=80=9D_?=session Message-ID: <90fe5921-dad3-4884-8e43-eede625d95f2@Spark> Hi, This Friday at 8.00 UTC we’ll have our first “Office hours” session according to the time slots earlier proposed in this email thread. It will be devoted to the Dublin PTG planning so please join us at #openstack-mistral if you want to participate (bring your items, get to know things going on etc.) And I’m still hoping that more people will give their feedback on the proposal itself. Just to remind what the proposed time slots for office hours are: 1. Mon 16.00 UTC (it used to be our time of weekly meetings) 2. Wed 3.00 UTC 3. Fri 8.00 UTC Thanks Renat Akhmerov @Nokia -------------- next part -------------- An HTML attachment was scrubbed... URL: From apetrich at redhat.com Mon Feb 12 09:47:14 2018 From: apetrich at redhat.com (Adriano Petrich) Date: Mon, 12 Feb 2018 09:47:14 +0000 Subject: [openstack-dev] [mistral] Proposing time slots for Mistral office hours In-Reply-To: References: <9580a64c-095b-49dd-a117-8f4e4a200022@Spark> Message-ID: I'm good for helping with the Monday and Friday time slots Cheers, Adriano On Mon, Feb 5, 2018 at 3:23 PM, Dougal Matthews wrote: > > > On 5 February 2018 at 07:48, Renat Akhmerov > wrote: > >> Hi, >> >> Not so long ago we decided to stop holding weekly meetings in one of the >> general IRC channel (it was #openstack-meeting-3 for the last several >> months). The main reason was that we usually didn’t have a good >> representation of the team there because the team is distributed across the >> world. We tried to find a time slot several times that would work well for >> all the team members but failed to. Another reason is that we didn’t always >> have a clear reason to gather because everyone was just focused on their >> tasks and a discussion wasn’t much needed so a meeting was even a >> distraction. >> >> However, despite all this we still would like channels to communicate, >> the team members and people who have user questions and/or would like to >> start contributing. >> >> Similarly to other teams in OpenStack we’d like to try the “Office hours” >> concept. If we follow it we’re supposed to have team members, for whom the >> time slot is OK, available in our channel #openstack-mistral during certain >> hours. These hours can be used for discussing our development stuff between >> team members from different time zones and people outside the team would >> know when they can come and talk to us. >> >> Just to start the discussion on what the office hours time slots could be >> I’m proposing the following time slots: >> >> 1. Mon 16.00 UTC (it used to be our time of weekly meetings) >> 2. Wed 3.00 UTC >> 3. Fri 8.00 UTC >> >> > These sounds good to me. I should be able to regularly attend the Monday > and Friday slots. > > I think we should ask Mistral cores to try and attend at least one of > these a week. > > > >> >> >> Each slot is one hour. >> >> Assumingly, #1 would be suitable for people in Europe and America. #2 for >> people in Asia and America. And #3 for people living in Europe and Asia. At >> least that was my thinking when I was wondering what the time slots should >> be. >> >> Please share your thoughts on this. The idea itself and whether the time >> slots look ok. 
>> >> Thanks >> >> Renat Akhmerov >> @Nokia >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gcerami at redhat.com Mon Feb 12 10:24:26 2018 From: gcerami at redhat.com (Gabriele Cerami) Date: Mon, 12 Feb 2018 10:24:26 +0000 Subject: [openstack-dev] [reno] an alternative approach to known issues In-Reply-To: <4e4eace4-9f7f-e6e3-d3b8-796f2414f7fb@openstack.org> References: <20180208224356.os3z5qqqcvo53xtp@localhost> <1518186260-sup-4136@lrrr.local> <4e4eace4-9f7f-e6e3-d3b8-796f2414f7fb@openstack.org> Message-ID: <20180212102426.7ui2a7nlqokf2mah@localhost> On 09 Feb, Thierry Carrez wrote: > Doug Hellmann wrote: > > What makes reno a good fit for this task? It seems like updating a > > regular documentation page in the source tree would work just as well, > > since presumably these technical debt descriptions don't need to be > > backported to stable branches. > > Yeah it feels like reno would add complexity for little benefit in that > process... Better track debt in a TODO document, or a proper task tracker ? The regular document was my first thought too, but then if we want to create a report on the active TDs, or automate a little the design note creation, mangle and analyze them a little, we will need to build proper tooling from scratch. Also most of this work would probably deprecate the known issue field in the release note. Any new tool that would need to be created, I still imagine it to be pretty similar to reno, at least regarding the process of adding something new: One file per note, together with the code, created from a template, able to be modified over time, and a command to create a report From gkotton at vmware.com Mon Feb 12 12:12:35 2018 From: gkotton at vmware.com (Gary Kotton) Date: Mon, 12 Feb 2018 12:12:35 +0000 Subject: [openstack-dev] [neutron][lbaas][neutron-lbaas][octavia] Announcing the deprecation of neutron-lbaas and neutron-lbaas-dashboard In-Reply-To: <27b4cffd-8ffb-3afc-7ae5-8c8b6854c31e@suse.com> References: <08d47fce-1fb0-0fa3-9ca7-cea25da60e3c@suse.com> <27b4cffd-8ffb-3afc-7ae5-8c8b6854c31e@suse.com> Message-ID: Hi, I have a number of issues with this: 1. I do not think that we should mark this as deprecated until we have a clear and working migration patch. Let me give an example. Say I have a user who is using Pike or Queens and has N LBaaS load balancers up and running. What if we upgrade to T and there is no LBaaS, only Octavia. What is the migration path here? Maybe I have missed this and would be happy to learn how this was done. 2. I think that none of the load balancing vendors have code in Octavia and this may be a problem (somewhat related to #1). I guess that there is enough warning but this is still concerning 3. The migration from V1 to V2 was not successful. So, I have some concerns about going to a new service completely. I prefer that we hold off on this until there is a clear picture. 
Thanks Gary On 2/1/18, 9:22 AM, "Andreas Jaeger" wrote: On 2018-01-31 22:58, Akihiro Motoki wrote: > I don't think we need to drop translation support NOW (at least for > neutron-lbaas-dashboard). > There might be fixes which affects translation and/or there might be > translation improvements. > I don't think a deprecation means no translation fix any more. It > sounds too aggressive. > Is there any problem to keep translations for them? Reading the whole FAQ - since bug fixes are planned, translations can merge back. So, indeed we can keep translation infrastructure set up. I recommend to translators to remove neutron-lbaas-dashboard from the priority list, Andreas > Akihiro > > 2018-02-01 3:28 GMT+09:00 Andreas Jaeger : >> In that case, I suggest to remove translation jobs for these repositories, >> >> Andreas >> -- >> Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi >> SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany >> GF: Felix Imendörffer, Jane Smithard, Graham Norton, >> HRB 21284 (AG Nürnberg) >> GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From gr at ham.ie Mon Feb 12 13:50:34 2018 From: gr at ham.ie (Graham Hayes) Date: Mon, 12 Feb 2018 13:50:34 +0000 Subject: [openstack-dev] [osc][python-openstackclient] Consistency of option name In-Reply-To: References: Message-ID: <7af959ac-0463-e62a-83fd-8d7fc1d7d2ef@ham.ie> On 12/02/18 04:18, Hongbin Lu wrote: > Hi all, > > I was working on the OSC plugin of my project and trying to choose a CLI > option to represent the availability zone of the container. When I came > across the existing commands, I saw some inconsistencies on the naming. > Some commands use the syntax '--zone ', while others use the syntax > '--availability-zone '. For example: > > * openstack host list ... [--zone ] > * openstack aggregate create ... [--zone ] > * openstack volume create ... [--availability-zone ] > * openstack consistency group create ... [--availability-zone > ] > > I wonder if it makes sense to address this inconsistency. Is it possible > have all commands using one syntax? > > Best regards, > Hongbin > Please please move to `availability-zone` - zone is a DNS zone (seen as Keystone took Domain :) ) within OSC. 
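To make the inconsistency concrete, a quick usage sketch (the --availability-zone spelling on the Compute commands below is the direction being asked for, not something OSC accepts today):

# Today the same concept is spelled two different ways:
$ openstack aggregate create --zone nova agg1
$ openstack volume create --availability-zone nova --size 1 vol1

# The consistent form being requested, with --zone presumably kept as a
# deprecated alias for a while:
$ openstack aggregate create --availability-zone nova agg1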
> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: OpenPGP digital signature URL: From eng.szaher at gmail.com Mon Feb 12 14:05:27 2018 From: eng.szaher at gmail.com (Saad Zaher) Date: Mon, 12 Feb 2018 14:05:27 +0000 Subject: [openstack-dev] [freezer] PTG planning Etherpad Message-ID: Hello everyone, Please, if anyone is going to attend the next PTG in dublin check ehterpad [1] for discussion agenda. Feel free to add or comment on topics you want to discuss in this PTG. Please make sure to add your irc or name to participants section. Best Regards, Saad! -------------- next part -------------- An HTML attachment was scrubbed... URL: From rico.lin.guanyu at gmail.com Mon Feb 12 14:35:59 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Mon, 12 Feb 2018 22:35:59 +0800 Subject: [openstack-dev] [heat] No Meeting This week Message-ID: Hi all Good news first! We released queens last week, so well done everyone. This week is Chinese new year, and Wednesday happen to be new year eve, so I will not hosting the meeting this week. Let's skip this one if no important stuff to talk about. See you at next meeting -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtroyer at gmail.com Mon Feb 12 14:44:46 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Mon, 12 Feb 2018 08:44:46 -0600 Subject: [openstack-dev] [osc][python-openstackclient] Consistency of option name In-Reply-To: References: Message-ID: On Sun, Feb 11, 2018 at 10:18 PM, Hongbin Lu wrote: > I was working on the OSC plugin of my project and trying to choose a CLI > option to represent the availability zone of the container. When I came > across the existing commands, I saw some inconsistencies on the naming. Some > commands use the syntax '--zone ', while others use the syntax > '--availability-zone '. For example: > > * openstack host list ... [--zone ] > * openstack aggregate create ... [--zone ] These likely date back to the original command mapping I did and in retrospect should have been --availability-zone. However they have been there since day 1 or 2. > * openstack volume create ... [--availability-zone ] > * openstack consistency group create ... [--availability-zone > ] > > I wonder if it makes sense to address this inconsistency. Is it possible > have all commands using one syntax? This is the sort of thing that should be addressed in the long-overdue OSC 4 release where we will make small breaking changes like this. Of course, the old option will be properly deprecated and silently supported for some time. dt -- Dean Troyer dtroyer at gmail.com From aheczko at mirantis.com Mon Feb 12 14:51:18 2018 From: aheczko at mirantis.com (Adam Heczko) Date: Mon, 12 Feb 2018 15:51:18 +0100 Subject: [openstack-dev] [freezer] PTG planning Etherpad In-Reply-To: References: Message-ID: Hello Saad, I think you missed link to the [1] etherpad. On Mon, Feb 12, 2018 at 3:05 PM, Saad Zaher wrote: > Hello everyone, > > Please, if anyone is going to attend the next PTG in dublin check ehterpad > [1] for discussion agenda. 
> > Feel free to add or comment on topics you want to discuss in this PTG. > > Please make sure to add your irc or name to participants section. > > > Best Regards, > Saad! > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Adam Heczko Security Engineer @ Mirantis Inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtroyer at gmail.com Mon Feb 12 14:51:33 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Mon, 12 Feb 2018 08:51:33 -0600 Subject: [openstack-dev] [osc][python-openstackclient] Consistency of option name In-Reply-To: <7af959ac-0463-e62a-83fd-8d7fc1d7d2ef@ham.ie> References: <7af959ac-0463-e62a-83fd-8d7fc1d7d2ef@ham.ie> Message-ID: On Mon, Feb 12, 2018 at 7:50 AM, Graham Hayes wrote: > Please please move to `availability-zone` - zone is a DNS zone (seen as > Keystone took Domain :) ) within OSC. As stated in another message, changing the Compute usage of --zone makes sense for OSC 4. Two additional things here: * Command option names have a lesser bar to clear (compared to resource names which must be unique) for uniqueness, as they are by definition context-sensitive. Like trademarks, the primary objective is to reduce user confusion. * --zone is really generic and I would suggest that DNS should also be using something to qualify it. The use of --zone in the Compute commands pre-dates the existence of Designate by at least a coupe of years. Also, the Network commands use "--dns-*" to refer to anything specifically DNS related, so for consistency, "--dns-zone" is a better fit. dt -- Dean Troyer dtroyer at gmail.com From gr at ham.ie Mon Feb 12 15:13:05 2018 From: gr at ham.ie (Graham Hayes) Date: Mon, 12 Feb 2018 15:13:05 +0000 Subject: [openstack-dev] [osc][python-openstackclient] Consistency of option name In-Reply-To: References: <7af959ac-0463-e62a-83fd-8d7fc1d7d2ef@ham.ie> Message-ID: <1328ce1c-d725-f9f4-0987-15064740aef4@ham.ie> On 12/02/18 14:51, Dean Troyer wrote: > On Mon, Feb 12, 2018 at 7:50 AM, Graham Hayes wrote: >> Please please move to `availability-zone` - zone is a DNS zone (seen as >> Keystone took Domain :) ) within OSC. > > As stated in another message, changing the Compute usage of --zone > makes sense for OSC 4. Two additional things here: > > * Command option names have a lesser bar to clear (compared to > resource names which must be unique) for uniqueness, as they are by > definition context-sensitive. Like trademarks, the primary objective > is to reduce user confusion. > > * --zone is really generic and I would suggest that DNS should also be > using something to qualify it. The use of --zone in the Compute > commands pre-dates the existence of Designate by at least a coupe of > years. OSC only predates Designate by 5 months ... > Also, the Network commands use "--dns-*" to refer to anything > specifically DNS related, so for consistency, "--dns-zone" is a better > fit. "Zone" was what we were recommend to use by the OSC devs at the time we wrote our OSC plugin, and at the time we were also *not* supposed to name space commands inside service parent (e.g. openstack zone create vs openstack dns zone create). 
For command flags --dns-zone seems like a good idea - but having a plain --zone is confusing when we have a top level "zone" object in the CLI, when the type of object that "--zone" refers to is different to "openstack zone " - Graham > > dt > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: OpenPGP digital signature URL: From zhipengh512 at gmail.com Mon Feb 12 15:13:01 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Mon, 12 Feb 2018 23:13:01 +0800 Subject: [openstack-dev] [nova][cyborg]Dublin PTG Cyborg Nova Interaction Discussion Message-ID: Hi Nova team, Cyborg will have ptg sessions on Mon and Tue from 2:00pm to 6:00pm, and we would love to invite any of you guys who is interested in nova-cyborg interaction to join the discussion. The discussion will mainly focus on: (1) Cyborg team recap on the resource provider features that are implemented in Queens. (2) Joint discussion on what will be the impact on Nova side and future collaboration areas. The session is planned for 40 mins long. If you are interested plz feedback which date best suit for your arrangement so that we could arrange the topic accordingly :) Thank you very much. -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From colleen at gazlene.net Mon Feb 12 15:14:40 2018 From: colleen at gazlene.net (Colleen Murphy) Date: Mon, 12 Feb 2018 16:14:40 +0100 Subject: [openstack-dev] [all][Kingbird][Heat][Glance]Multi-Region Orchestrator In-Reply-To: References: <2500e357-23a3-2d53-0b5c-591dbd0d4cbb@redhat.com> Message-ID: On Mon, Feb 12, 2018 at 7:44 AM, Goutham Pratapa wrote: > > OUR USE-CASES QUOTA-MANAGEMENT: > > 1. Admin must have a global view of all quotas to all tenants across all the > regions > 2. Admin can periodically balance the quotas (we have a formula using which > we do this balancing ) across regions > 3. Admin can update, Delete quotas for tenants > 4. Admin can sync quotas for all tenants so that the quotas will be updated > in all regions. Global quota management is something we're seeking to solve in keystone[1][2][3][4], which would enable admins to do 1, 3, and 4 via keystone (though admittedly this is a few cycles out). We expect to dive into this at the PTG if you'd like to help shape this work. [1] http://specs.openstack.org/openstack/keystone-specs/specs/keystone/ongoing/unified-limits.html [2] http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/limits-api.html [3] https://review.openstack.org/#/c/441203/ [4] https://review.openstack.org/#/c/540803/ Colleen From dtroyer at gmail.com Mon Feb 12 15:24:05 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Mon, 12 Feb 2018 09:24:05 -0600 Subject: [openstack-dev] [osc][python-openstackclient] Consistency of option name In-Reply-To: <1328ce1c-d725-f9f4-0987-15064740aef4@ham.ie> References: <7af959ac-0463-e62a-83fd-8d7fc1d7d2ef@ham.ie> <1328ce1c-d725-f9f4-0987-15064740aef4@ham.ie> Message-ID: On Mon, Feb 12, 2018 at 9:13 AM, Graham Hayes wrote: > OSC only predates Designate by 5 months ... 
My bad, I didn't check dates. > "Zone" was what we were recommend to use by the OSC devs at the time we > wrote our OSC plugin, and at the time we were also *not* supposed to > name space commands inside service parent (e.g. openstack zone create vs > openstack dns zone create). Namespacing commands and naming options are totally separate things. It is likely I suggested --zone at the time, and in the context of DNS commands it is very clear. Also, in the context of Compute commands, --zone meaning availability zone is also clear. > For command flags --dns-zone seems like a good idea - but having a plain > --zone is confusing when we have a top level "zone" object in the CLI, > when the type of object that "--zone" refers to is different to > "openstack zone " Again, if there is confusion, things should be more specifically named to remove the confusion. Maybe allowing "zone" to be assumed to be a DNS zone was a mistake, I've made plenty of those in OSC already, so there is precedent, but it seemed reasonable at the time and we (OSC team) do not control what external plugins do. For example, I really resist using abbreviations in OSC, but in some places to not do so is to buck trends that any semi-experienced user in the field would expect. The last discussion of this was last week regarding "MTU" in Network commands. These are not hard rules, but strong guidelines that can and should be interpreted in the context that they will be applied. And in the end, the result should be one that is understandable, clear and even expected by the users. dt -- Dean Troyer dtroyer at gmail.com From openstack at fried.cc Mon Feb 12 15:27:42 2018 From: openstack at fried.cc (Eric Fried) Date: Mon, 12 Feb 2018 09:27:42 -0600 Subject: [openstack-dev] [nova][cyborg]Dublin PTG Cyborg Nova Interaction Discussion In-Reply-To: References: Message-ID: <8dc83751-af2d-5f55-fefc-8a570be9680c@fried.cc> I'm interested. No date/time preference so far as long as it sticks to Monday/Tuesday. efried On 02/12/2018 09:13 AM, Zhipeng Huang wrote: > Hi Nova team, > > Cyborg will have ptg sessions on Mon and Tue from 2:00pm to 6:00pm, and > we would love to invite any of you guys who is interested in nova-cyborg > interaction to join the discussion. The discussion will mainly focus on: > > (1) Cyborg team recap on the resource provider features that are > implemented in Queens. > (2) Joint discussion on what will be the impact on Nova side and future > collaboration areas. > > The session is planned for 40 mins long. > > If you are interested plz feedback which date best suit for your > arrangement so that we could arrange the topic accordingly :) > > Thank you very much. > > > > -- > Zhipeng (Howard) Huang > > Standard Engineer > IT Standard & Patent/IT Product Line > Huawei Technologies Co,. 
Ltd > Email: huangzhipeng at huawei.com > Office: Huawei Industrial Base, Longgang, Shenzhen > > (Previous) > Research Assistant > Mobile Ad-Hoc Network Lab, Calit2 > University of California, Irvine > Email: zhipengh at uci.edu > Office: Calit2 Building Room 2402 > > OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From thingee at gmail.com Mon Feb 12 15:36:34 2018 From: thingee at gmail.com (Mike Perez) Date: Tue, 13 Feb 2018 02:36:34 +1100 Subject: [openstack-dev] [ptg] Lightning talks In-Reply-To: <20180208002535.GA14568@gmail.com> References: <20180208002535.GA14568@gmail.com> Message-ID: <20180212153634.GG14568@gmail.com> On 11:25 Feb 08, Mike Perez wrote: > Hey all! > > I'm looking for six 5-minute lightning talks for the PTG in Dublin. This will > be on Friday March 2nd at 13:00-13:30 local time. > > Appropriate 5 minute talk examples: > * Neat features in libraries like oslo that we should consider adopting in our > community wide goals. > * Features and tricks in your favorite editor that makes doing work easier. > * Infra tools that maybe not a lot of people know about yet. Zuul v3 explained > in five minutes anyone? > * Some potential API specification from the API SIG that we should adopt as > a community wide goal. > > Please email me DIRECTLY the following information: > > Title: > Speaker(s) full name: > Abstract: > Link to presentation or attachment if you have it already. Laptop on stage will > be loaded with your presentation already. I'll have open office available so > odp, odg, otp, pdf, limited ppt format support. > > Submission deadline is February 16 00:00 UTC, and then I'll send confirmation > emails to speakers requesting for slides. Thank you, looking forward to hearing > some great talks from our community! Hey all, Just a reminder that lightning talk proposals for the PTG in Dublin is due February 16 at 00:00 utc. We're building up a nice line up already. Details quoted above, Thanks! -- Mike Perez (thingee) -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From jaypipes at gmail.com Mon Feb 12 15:54:16 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 12 Feb 2018 10:54:16 -0500 Subject: [openstack-dev] [nova][cyborg]Dublin PTG Cyborg Nova Interaction Discussion In-Reply-To: <8dc83751-af2d-5f55-fefc-8a570be9680c@fried.cc> References: <8dc83751-af2d-5f55-fefc-8a570be9680c@fried.cc> Message-ID: <4ad0d302-2839-ad6e-159f-3c509aaaa7f0@gmail.com> On 02/12/2018 10:27 AM, Eric Fried wrote: > I'm interested. No date/time preference so far as long as it sticks to > Monday/Tuesday. Same for me. 
-jay From thingee at gmail.com Mon Feb 12 15:55:14 2018 From: thingee at gmail.com (Mike Perez) Date: Tue, 13 Feb 2018 02:55:14 +1100 Subject: [openstack-dev] Feedback on the Dev Digest Message-ID: <20180212155514.GH14568@gmail.com> Hey all, I setup a two question survey asking about your frequency with the Dev Digest, and how it can be improved: https://openstackfoundation.formstack.com/forms/openstack_developer_digest_feedback In case you're not familiar, the Dev Digest tries to provide summaries of the OpenStack Dev mailing list, for people who might not have time to read every message and thread on the list. The hope is for people to be informed on discussions they would've otherwise missed, and be able to get caught up to chime in if necessary. This is a community effort worked on via etherpad: https://etherpad.openstack.org/p/devdigest The content on Fridays is posted to the Dev list in plaintext, LWN, Twitter and the OpenStack blog: https://www.openstack.org/blog/ Thank you! -- Mike Perez (thingee) -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From mriedemos at gmail.com Mon Feb 12 15:55:29 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 12 Feb 2018 09:55:29 -0600 Subject: [openstack-dev] [nova] Reminder about stable/queens backports Message-ID: I'm going through the proposed stable/queens backports and marking them as -Workflow if they are not fixing a regression introduced in queens itself or required for a queens-rc2 tag. If we have a need for a queens-rc2 tag then we can assess if any of these other backports should be included, otherwise they'll go into the first release after the queens GA. -- Thanks, Matt From cdent+os at anticdent.org Mon Feb 12 15:57:05 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Mon, 12 Feb 2018 15:57:05 +0000 (GMT) Subject: [openstack-dev] [nova][cyborg]Dublin PTG Cyborg Nova Interaction Discussion In-Reply-To: <4ad0d302-2839-ad6e-159f-3c509aaaa7f0@gmail.com> References: <8dc83751-af2d-5f55-fefc-8a570be9680c@fried.cc> <4ad0d302-2839-ad6e-159f-3c509aaaa7f0@gmail.com> Message-ID: On Mon, 12 Feb 2018, Jay Pipes wrote: > On 02/12/2018 10:27 AM, Eric Fried wrote: >> I'm interested. No date/time preference so far as long as it sticks to >> Monday/Tuesday. > > Same for me. Tuesday would be best for me as Monday is api-sig day. -- Chris Dent (⊙_⊙') https://anticdent.org/ freenode: cdent tw: @anticdent From edmondsw at us.ibm.com Mon Feb 12 16:04:32 2018 From: edmondsw at us.ibm.com (William M Edmonds) Date: Mon, 12 Feb 2018 11:04:32 -0500 Subject: [openstack-dev] [osc][python-openstackclient] Consistency of option name In-Reply-To: References: <7af959ac-0463-e62a-83fd-8d7fc1d7d2ef@ham.ie> <1328ce1c-d725-f9f4-0987-15064740aef4@ham.ie> Message-ID: keystone may have taken "domain", but it didn't take "dns-domain" Dean Troyer wrote on 02/12/2018 10:24:05 AM: > > On Mon, Feb 12, 2018 at 9:13 AM, Graham Hayes wrote: > > OSC only predates Designate by 5 months ... > > My bad, I didn't check dates. > > > "Zone" was what we were recommend to use by the OSC devs at the time we > > wrote our OSC plugin, and at the time we were also *not* supposed to > > name space commands inside service parent (e.g. openstack zone create vs > > openstack dns zone create). > > Namespacing commands and naming options are totally separate things. 
> It is likely I suggested --zone at the time, and in the context of DNS > commands it is very clear. Also, in the context of Compute commands, > --zone meaning availability zone is also clear. > > > For command flags --dns-zone seems like a good idea - but having a plain > > --zone is confusing when we have a top level "zone" object in the CLI, > > when the type of object that "--zone" refers to is different to > > "openstack zone " > > Again, if there is confusion, things should be more specifically named > to remove the confusion. Maybe allowing "zone" to be assumed to be a > DNS zone was a mistake, I've made plenty of those in OSC already, so > there is precedent, but it seemed reasonable at the time and we (OSC > team) do not control what external plugins do. > > For example, I really resist using abbreviations in OSC, but in some > places to not do so is to buck trends that any semi-experienced user > in the field would expect. The last discussion of this was last week > regarding "MTU" in Network commands. > > These are not hard rules, but strong guidelines that can and should be > interpreted in the context that they will be applied. And in the end, > the result should be one that is understandable, clear and even > expected by the users. > > dt > > -- > > Dean Troyer > dtroyer at gmail.com > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > https://urldefense.proofpoint.com/v2/url? > u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Ddev&d=DwIGaQ&c=jf_iaSHvJObTbx- > siA1ZOg&r=uPMq7DJxi29v-9CkM5RT0pxLlwteWvldJgmFhLURdvg&m=Fr9TF_mDZVJgACWKoyXcnphs-6rMDWufyRhpQEtUask&s=m5wXNx8okCgs7CbNoMhHEQev0xJCFIq61pcmnWBugSs&e= > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gr at ham.ie Mon Feb 12 16:17:45 2018 From: gr at ham.ie (Graham Hayes) Date: Mon, 12 Feb 2018 16:17:45 +0000 Subject: [openstack-dev] [osc][python-openstackclient] Consistency of option name In-Reply-To: References: <7af959ac-0463-e62a-83fd-8d7fc1d7d2ef@ham.ie> <1328ce1c-d725-f9f4-0987-15064740aef4@ham.ie> Message-ID: On 12/02/18 16:04, William M Edmonds wrote: > keystone may have taken "domain", but it didn't take "dns-domain" No, but the advice at the time was to move to zone, and match DNS RFCs, and not namespace objects with the service type. We moved from "domain" -> "zone" and "records" -> "recordsets" in both the CLI and in our V2 API (in Aug 2013, so the time for change has long passed). The point of my initial email was that if we were moving some of the inconsistent naming for availability zones to something, that moving it to "availability-zone" would be better than "zone". I think that point has been made, so lets leave it at that. - Graham > > Dean Troyer wrote on 02/12/2018 10:24:05 AM: >> >> On Mon, Feb 12, 2018 at 9:13 AM, Graham Hayes wrote: >> > OSC only predates Designate by 5 months ... >> >> My bad, I didn't check dates. >> >> > "Zone" was what we were recommend to use by the OSC devs at the time we >> > wrote our OSC plugin, and at the time we were also *not* supposed to >> > name space commands inside service parent (e.g. openstack zone create vs >> > openstack dns zone create). >> >> Namespacing commands and naming options are totally separate things. >> It is likely I suggested --zone at the time, and in the context of DNS >> commands it is very clear.  
Also, in the context of Compute commands, >> --zone meaning availability zone is also clear. >> >> > For command flags --dns-zone seems like a good idea - but having a plain >> > --zone is confusing when we have a top level "zone" object in the CLI, >> > when the type of object that "--zone" refers to is different to >> > "openstack zone " >> >> Again, if there is confusion, things should be more specifically named >> to remove the confusion.  Maybe allowing "zone" to be assumed to be a >> DNS zone was a mistake, I've made plenty of those in OSC already, so >> there is precedent, but it seemed reasonable at the time and we (OSC >> team) do not control what external plugins do. >> >> For example, I really resist using abbreviations in OSC, but in some >> places to not do so is to buck trends that any semi-experienced user >> in the field would expect.  The last discussion of this was last week >> regarding "MTU" in Network commands. >> >> These are not hard rules, but strong guidelines that can and should be >> interpreted in the context that they will be applied.  And in the end, >> the result should be one that is understandable, clear and even >> expected by the users. >> >> dt >> >> -- >> >> Dean Troyer >> dtroyer at gmail.com >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> https://urldefense.proofpoint.com/v2/url? >> > u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Ddev&d=DwIGaQ&c=jf_iaSHvJObTbx- >> > siA1ZOg&r=uPMq7DJxi29v-9CkM5RT0pxLlwteWvldJgmFhLURdvg&m=Fr9TF_mDZVJgACWKoyXcnphs-6rMDWufyRhpQEtUask&s=m5wXNx8okCgs7CbNoMhHEQev0xJCFIq61pcmnWBugSs&e= >> > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: OpenPGP digital signature URL: From prometheanfire at gentoo.org Mon Feb 12 16:25:44 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Mon, 12 Feb 2018 10:25:44 -0600 Subject: [openstack-dev] [requirements] we are now unfrozen and branched Message-ID: <20180212162544.ws7u2nwlnrfltr3s@gentoo.org> This means we are back to business as usual. cycle trailing projects have been warned not to merge requirements updates until they branch or get an ack from a requirements core. -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From shiina.hironori at jp.fujitsu.com Mon Feb 12 16:41:48 2018 From: shiina.hironori at jp.fujitsu.com (Shiina, Hironori) Date: Mon, 12 Feb 2018 16:41:48 +0000 Subject: [openstack-dev] [ironic] Nominating Hironori Shiina for ironic-core Message-ID: Thank you, everyone! I'm glad to join the team. 
Thanks,
Hironori
________________________________________
From: Julia Kreger [juliaashleykreger at gmail.com]
Sent: 10 February 2018 0:22
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [ironic] Nominating Hironori Shiina for ironic-core

Since all of our ironic cores have replied and nobody has stated any objections, I guess it is time to welcome Hironori to the team! I will make the changes in gerrit after coffee.

Thanks everyone!

-Julia

On Fri, Feb 9, 2018 at 7:13 AM, Sam Betts (sambetts) wrote:
> +1
>
>
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From ryan.beisner at canonical.com Mon Feb 12 17:00:03 2018
From: ryan.beisner at canonical.com (Ryan Beisner)
Date: Mon, 12 Feb 2018 11:00:03 -0600
Subject: [openstack-dev] [charms] Propose Andrew McLeod for OpenStack Charmers team
Message-ID: 

Hi All,

I'd like to propose Andrew McLeod for the OpenStack Charmers (LP) and charms-core (Gerrit) teams. Andrew has made many commits and bugfixes to the OpenStack Charms over the past couple of years, and he has general charming knowledge and experience which is wider than just OpenStack.

He has actively participated in the last two OpenStack Charms release processes. He is also the original author and current maintainer of the magpie charm.

Cheers,

Ryan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From balazs.gibizer at ericsson.com Mon Feb 12 17:11:45 2018
From: balazs.gibizer at ericsson.com (Balázs Gibizer)
Date: Mon, 12 Feb 2018 18:11:45 +0100
Subject: [openstack-dev] [nova] Notification update week 7
Message-ID: <1518455505.18558.2@smtp.office365.com>

Hi,

Here is the status update / focus settings mail for w7.

Bugs
----
No new bugs. No change from last week's bug status.

Versioned notification transformation
-------------------------------------
The rocky bp has been created https://blueprints.launchpad.net/nova/+spec/versioned-notification-transformation-rocky
Every open patch needs to be reproposed to this bp as soon as master opens for Rocky.

Introduce instance.lock and instance.unlock notifications
---------------------------------------------------------
The bp https://blueprints.launchpad.net/nova/+spec/trigger-notifications-when-lock-unlock-instances was briefly discussed at the last nova weekly meeting and approved.

Add the user id and project id of the user who initiated the instance action to the notification
-----------------------------------------------------------------
The bp https://blueprints.launchpad.net/nova/+spec/add-action-initiator-to-instance-action-notifications was briefly discussed at the last nova weekly meeting; there was no objection but it is still pending approval.

Factor out duplicated notification sample
-----------------------------------------
https://review.openstack.org/#/q/topic:refactor-notification-samples+status:open
No open patches. We can expect some as soon as master opens for Rocky.

Weekly meeting
--------------
The next three meetings are cancelled. The next meeting will be held after the PTG.
Cheers, gibi From alex.kavanagh at canonical.com Mon Feb 12 17:38:07 2018 From: alex.kavanagh at canonical.com (Alex Kavanagh) Date: Mon, 12 Feb 2018 17:38:07 +0000 Subject: [openstack-dev] [charms] Propose Andrew McLeod for OpenStack Charmers team In-Reply-To: References: Message-ID: Positive +1 from me. Andrew would make a great additiona. On Mon, Feb 12, 2018 at 5:00 PM, Ryan Beisner wrote: > Hi All, > > I'd like to propose Andrew McLeod for the OpenStack Charmers (LP) and > charms-core (Gerrit) teams. Andrew has made many commits and bugfixes to > the OpenStack Charms over the past couple of years, and he has general > charming knowledge and experience which is wider than just OpenStack. > > He has actively participated in the last two OpenStack Charms release > processes. He is also the original author and current maintainer of the > magpie charm. > > Cheers, > > Ryan > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Alex Kavanagh - Software Engineer Cloud Dev Ops - Solutions & Product Engineering - Canonical Ltd -------------- next part -------------- An HTML attachment was scrubbed... URL: From ed at leafe.com Mon Feb 12 17:43:59 2018 From: ed at leafe.com (Ed Leafe) Date: Mon, 12 Feb 2018 11:43:59 -0600 Subject: [openstack-dev] [nova][cyborg]Dublin PTG Cyborg Nova Interaction Discussion In-Reply-To: References: <8dc83751-af2d-5f55-fefc-8a570be9680c@fried.cc> <4ad0d302-2839-ad6e-159f-3c509aaaa7f0@gmail.com> Message-ID: <99F0F3E9-58B3-41B5-924E-F66002AFC1D1@leafe.com> On Feb 12, 2018, at 9:57 AM, Chris Dent wrote: > > Tuesday would be best for me as Monday is api-sig day. Same, for the same reason. -- Ed Leafe From sean.mcginnis at gmx.com Mon Feb 12 17:57:16 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 12 Feb 2018 11:57:16 -0600 Subject: [openstack-dev] [mistral][release] Release of openstack/mistral-extra failed Message-ID: <20180212175716.GA5777@sm-xps> Hey Mistral team, We had a release job failure for mistral-extra. The issue was caused by something that has already been fixed in master (and stable/queens). The root cause of the problem is the way that tox is using constraints. This caused an issue during an attempt to create a source distribution that calls a command similar to "tox -e venv -vv -- python setup.py sdist", which then fails pip install due to a constraint being passed in to a local path install. I have proposed a backport from the fix to master with this: https://review.openstack.org/543563 Unfortunately, we've now tagged the 5.2.1 release in git, but we are not able to publish the release artifacts for it. We will need the above patch to land in stable/pike, then a new 5.2.2 release proposed to get that published. Please let me know if you have any questions about this. Thanks! Sean ----- Forwarded message from zuul at openstack.org ----- Date: Mon, 12 Feb 2018 13:55:35 +0000 From: zuul at openstack.org To: release-job-failures at lists.openstack.org Subject: [Release-job-failures] Release of openstack/mistral-extra failed Reply-To: openstack-dev at lists.openstack.org Build failed. 
- release-openstack-python http://logs.openstack.org/13/13fec4048a3c57d77307f4384b26788227179113/release/release-openstack-python/1dc8c7a/ : FAILURE in 4m 45s - announce-release announce-release : SKIPPED - propose-update-constraints propose-update-constraints : SKIPPED _______________________________________________ Release-job-failures mailing list Release-job-failures at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures ----- End forwarded message ----- From ramamani.yeleswarapu at intel.com Mon Feb 12 18:34:12 2018 From: ramamani.yeleswarapu at intel.com (Yeleswarapu, Ramamani) Date: Mon, 12 Feb 2018 18:34:12 +0000 Subject: [openstack-dev] [ironic] this week's priorities and subteam reports Message-ID: Hi, We are glad to present this week's priorities and subteam report for Ironic. As usual, this is pulled directly from the Ironic whiteboard[0] and formatted. This Week's Priorities (as of the weekly ironic meeting) ======================================================== Weekly priorities ----------------- - Fix the multitenant grenade - https://bugs.launchpad.net/ironic/+bug/1744139 - Add tempest job for ironic queens branch https://review.openstack.org/543555 - CI and docs work for classic drivers deprecation (see status below) - Required Backports/Nice to haves below - CRITICAL bugs (must be fixed and backported to queens before the release) - ironic-inspector: rare crash when ironic port list returns HTTP 400 https://bugs.launchpad.net/ironic-inspector/+bug/1748893 - the actual bug is that ironic returns 400 on port.list when node deletion races with it - ironic-inspector: broken noauth mode: https://bugs.launchpad.net/ironic-inspector/+bug/1748263 - Fix as many bugs as possible Required Queens Backports ------------------------- - Traits instance_info validation - https://review.openstack.org/#/c/543461/ - mgoddard says it is a nice to have - Switch to hardware types - https://review.openstack.org/#/c/537959/ Nice to have backports ---------------------- - Ansible docs - https://review.openstack.org/#/c/525501/ - inspector: do not try passing non-MACs as switch_id: https://review.openstack.org/542214 Vendor priorities ----------------- cisco-ucs: Patches in works for SDK update, but not posted yet, currently rebuilding third party CI infra after a disaster... idrac: RFE and first several patches for adding UEFI support will be posted by Tuesday, 1/9 ilo: https://review.openstack.org/#/c/530838/ - OOB Raid spec for iLO5 irmc: None oneview: Subproject priorities --------------------- bifrost: ironic-inspector (or its client): networking-baremetal: networking-generic-switch: - initial release note https://review.openstack.org/#/c/534201/ MERGED sushy and the redfish driver: Bugs (dtantsur, vdrok, TheJulia) -------------------------------- - Stats (diff between 5 Feb 2018 and 12 Feb 2018) - Ironic: 209 bugs (-13) + 247 wishlist items. 2 new (+1), 157 in progress (-4), 1 critical, 29 high (-5) and 20 incomplete (-5) - Inspector: 17 bugs (+3) + 25 wishlist items. 0 new, 14 in progress (+2), 2 critical (+2), 3 high (+1) and 4 incomplete - Nova bugs with Ironic tag: 14. 
1 new, 0 critical, 0 high - via http://dashboard-ironic.7e14.starter-us-west-2.openshiftapps.com/ - the dashboard was abruptly deleted and needs a new home :( - use it locally with `tox -erun` if you need to - HIGH bugs with patches to review: - Clean steps are not tested in gate https://bugs.launchpad.net/ironic/+bug/1523640: Add manual clean step ironic standalone test https://review.openstack.org/#/c/429770/15 - Needs to be reproposed to the ironic tempest plugin repository. - prepare_instance() is not called for whole disk images with 'agent' deploy interface https://bugs.launchpad.net/ironic/+bug/1713916: - Fix ``agent`` deploy interface to call ``boot.prepare_instance`` https://review.openstack.org/#/c/499050/ - (TheJulia) Currently WF-1, as revision is required for deprecation. - If provisioning network is changed, Ironic conductor does not behave correctly https://bugs.launchpad.net/ironic/+bug/1679260: Ironic conductor works correctly on changes of networks: https://review.openstack.org/#/c/462931/ - (rloo) needs some direction - may be fixed as part of https://review.openstack.org/#/c/460564/ CI refactoring and missing test coverage ---------------------------------------- - not considered a priority, it's a 'do it always' thing - Standalone CI tests (vsaienk0) - next patch to be reviewed, needed for 3rd party CI: https://review.openstack.org/#/c/429770/ - localboot with partitioned image patches: - Ironic - add localboot partitioned image test: https://review.openstack.org/#/c/502886/ - when previous are merged TODO (vsaienko) - Upload tinycore partitioned image to tarbals.openstack.org - Switch ironic to use tinyipa partitioned image by default - Missing test coverage (all) - portgroups and attach/detach tempest tests: https://review.openstack.org/382476 - adoption: https://review.openstack.org/#/c/344975/ - should probably be changed to use standalone tests - root device hints: TODO - node take over - resource classes integration tests: https://review.openstack.org/#/c/443628/ - radosgw (https://bugs.launchpad.net/ironic/+bug/1737957) Essential Priorities ==================== Ironic client API version negotiation (TheJulia, dtantsur) ---------------------------------------------------------- - RFE https://bugs.launchpad.net/python-ironicclient/+bug/1671145 - Nova bug https://bugs.launchpad.net/nova/+bug/1739440 - gerrit topic: https://review.openstack.org/#/q/topic:bug/1671145 - status as of 12 Feb 2017: - TODO: - API-SIG guideline on consuming versions in SDKs https://review.openstack.org/532814 on review - establish foundation for using version negotiation in nova - nothing more for Queens. Stay tuned... - need to make sure that we discuss/agree with nova about how to do this Classic drivers deprecation (dtantsur) -------------------------------------- - spec: http://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/classic-drivers-future.html - status as of 12 Feb 2017: - dev documentation for hardware types: https://review.openstack.org/537959 - switch documentation to hardware types: - install and admin guides done - need help from vendors updating their pages! - api-ref examples: TODO - migration of classic drivers to hardware types: done - migration of CI to hardware types - ironic and inspector: done - IPA: TODO - ironic-lib: TODO? - python-ironicclient: TODO? - python-ironic-inspector-client: TODO? - virtualbmc: TODO? 
- bifrost: https://review.openstack.org/#/c/540153/ Merged - actual deprecation: done Traits support planning (mgoddard, johnthetubaguy, dtantsur) ------------------------------------------------------------ - status as of 12 Feb 2018: - deploy templates spec: https://review.openstack.org/504952 needs reviews - depends on deploy-steps spec: https://review.openstack.org/#/c/412523 - traits API: - need to validate node's instance_info['traits'] at deploy time (https://bugs.launchpad.net/ironic/+bug/1722194/comments/31) - https://review.openstack.org/#/c/543461 - will need to backport this to stable/queens Reference architecture guide (dtantsur, sambetts) ------------------------------------------------- - status as of 12 Feb 2017: - dtantsur is returning to this after the release - list of cases from the PTG - Admin-only provisioner - small and/or rare: TODO - non-HA acceptable, noop/flat network acceptable - large and/or frequent: TODO - HA required, neutron network or noop (static) network - Bare metal cloud for end users - smaller single-site: TODO - non-HA, ironic conductors on controllers and noop/flat network acceptable - larger single-site: TODO - HA, split out ironic conductors, neutron networking, virtual media > iPXE > PXE/TFTP - split out TFTP servers if you need them? - larger multi-site: TODO - cells v2 - ditto as single-site otherwise? High Priorities =============== Neutron event processing (vdrok, vsaienk0, sambetts) ---------------------------------------------------- - status as of 27 Sep 2017: - spec at https://review.openstack.org/343684, ready for reviews, replies from authors - WIP code at https://review.openstack.org/440778 Routed network support (sambetts, vsaienk0, bfournie, hjensas) -------------------------------------------------------------- - status as of 12 Feb 2018: - All code patches are merged. - One CI patch left, rework devstack baremetal simulation. To be done in Rocky? - This is to have actual 'flat' networks in CI. - Placement API work to be done in Rocky due to: Challenges with integration to Placement due to the way the integration was done in neutron. Neutron will create a resource provider for network segments in Placement, then it creates an os-aggregate in Nova for the segment, adds nova compute hosts to this aggregate. Ironic nodes cannot be added to host-aggregates. I (hjensas) had a short discussion with neutron devs (mlavalle) on the issue: http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/%23openstack-neutron.2018-01-12.log.html#t2018-01-12T17:05:38 There are patches in Nova to add support for ironic nodes in host-aggregates: - https://review.openstack.org/#/c/526753/ allow compute nodes to be associated with host agg - https://review.openstack.org/#/c/529135/ (Spec) - Patches: - CI Patches: - https://review.openstack.org/#/c/392959/ Rework Ironic devstack baremetal network simulation Rescue mode (rloo, stendulker) ------------------------------ - Status as on 12 Feb 2018 - spec: http://specs.openstack.org/openstack/ironic-specs/specs/approved/implement-rescue-mode.html - code: https://review.openstack.org/#/q/topic:bug/1526449+status:open+OR+status:merged - ironic side: - all code patches have merged except for - Add documentation for rescue mode: https://review.openstack.org/#/c/431622/ MERGED - Devstack changes to enable testing add support for rescue mode: https://review.openstack.org/#/c/524118/ - We need to be careful with this, in that we can't use python-ironicclient changes that have not been released. 
- Update "standalone" job for supporting rescue mode: https://review.openstack.org/#/c/537821/ - Rescue mode standalone tests: https://review.openstack.org/#/c/538119/ (failing CI, not ready for reviews) - Can't Merge until we do a client release with rescue support (in Rocky): - Tempest tests with nova: https://review.openstack.org/#/c/528699/ - Run the tempest test on the CI: https://review.openstack.org/#/c/528704/ - succeeded in rescuing: http://logs.openstack.org/04/528704/16/check/ironic-tempest-dsvm-ipa-wholedisk-bios-agent_ipmitool-tinyipa/4b74169/logs/screen-ir-cond.txt.gz#_Feb_02_09_44_12_940007 - nova side: - https://blueprints.launchpad.net/nova/+spec/ironic-rescue-mode: - approved for Queens but didn't get the ironic code (client) done in time - (TheJulia) Nova has indicated that this is deferred until Rocky. - To get the nova patch merged, we need: - release new python-ironicclient - update ironicclient version in upper-constraints (this patch will be posted automatically) - update ironicclient version in global-requirement (this patch needs to be posted manually) - code patch: https://review.openstack.org/#/c/416487/ - CI is needed for nova part to land - tiendc is working for CI Clean up deploy interfaces (vdrok) ---------------------------------- - status as of 5 Feb 2017: - patch https://review.openstack.org/524433 needs update and rebase Zuul v3 jobs in-tree (sambetts, derekh, jlvillal, rloo) ------------------------------------------------------- - etherpad tracking zuul v3 -> intree: https://etherpad.openstack.org/p/ironic-zuulv3-intree-tracking - cleaning up/centralizing job descriptions (eg 'irrelevant-files'): DONE - Next TODO is to convert jobs on master, to proper ansible. NOT a high priority though. - (pas-ha) DNM experimental patch with "devstack-tempest" as base job https://review.openstack.org/#/c/520167/ Graphical console interface (pas-ha, vdrok, rpioso) --------------------------------------------------- - status as of 8 Jan 2017: - spec on review: https://review.openstack.org/#/c/306074/ - there is nova part here, which has to be approved too - dtantsur is worried by absence of progress here - (TheJulia) I think for rocky, it might be worth making it a prime focus, or making it a background goal. BIOS config framework (dtantsur, yolanda, rpioso) ------------------------------------------------- - status as of 8 Jan 2017: - spec under active review: https://review.openstack.org/#/c/496481/ Ansible deploy interface (pas-ha) --------------------------------- - spec: http://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/ansible-deploy-driver.html - status as of 5 Feb 2017: - code merged, CI coverage via the standalone job - docs: https://review.openstack.org/#/c/525501/ OpenStack Priorities ==================== Python 3.5 compatibility (Nisha, Ankit) --------------------------------------- - Topic: https://review.openstack.org/#/q/topic:goal-python35+NOT+project:openstack/governance+NOT+project:openstack/releases - this include all projects, not only ironic - please tag all reviews with topic "goal-python35" - TODO submit the python3 job for IPA - for ironic and ironic-inspector job enabled by disabling swift as swift is still lacking py3.5 support. - anupn to update the python3 job to build tinyipa with python3 - (anupn): Talked with swift folks and there is a bug upstream opened https://review.openstack.org/#/c/401397 for py3 support in swift. But this is not on their priority - Right now patch pass all gate jobs except agent_- drivers. 
- updating setup.cfg (part of requirements for the goal): - ironic: https://review.openstack.org/#/c/539500/ - MERGED - ironic-inspector: https://review.openstack.org/#/c/539502/ - MERGED Deploying with Apache and WSGI in CI (pas-ha, vsaienk0) ------------------------------------------------------- - ironic is mostly finished - (pas-ha) needs to be rewritten for uWSGI, patches on review: - https://review.openstack.org/#/c/507067 - inspector is TODO and depends on https://review.openstack.org/#/q/topic:bug/1525218 - delayed as the HA work seems to take a different direction Subprojects =========== Inspector (dtantsur) -------------------- - trying to flip dsvm-discovery to use the new dnsmasq pxe filter and failing because of bash :Dhttps://review.openstack.org/#/c/525685/6/devstack/plugin.sh at 202 - follow-ups being merged/reviewed; working on state consistency enhancements https://review.openstack.org/#/c/510928/ too (HA demo follow-up) Bifrost (TheJulia) ------------------ - Also seems a recent authentication change in keystoneauth1 has broken processing of the clouds.yaml files, i.e. `openstack` command does not work. - TheJulia will try to look at this this week. Drivers: -------- Cisco UCS (sambetts) Last updated 2018/02/05 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - Cisco CIMC driver CI back up and working on every patch - Cisco UCSM driver CI in development - Patches for updating the UCS python SDKs are in the works and should be posted soon ......... Until next week, --Rama [0] https://etherpad.openstack.org/p/IronicWhiteBoard -------------- next part -------------- An HTML attachment was scrubbed... URL: From pete.vandergiessen at canonical.com Mon Feb 12 19:04:36 2018 From: pete.vandergiessen at canonical.com (Pete Vander Giessen) Date: Mon, 12 Feb 2018 19:04:36 +0000 Subject: [openstack-dev] [charms] Propose Andrew McLeod for OpenStack Charmers team In-Reply-To: References: Message-ID: +1 from me, too. On Mon, Feb 12, 2018 at 12:39 PM Alex Kavanagh wrote: > Positive +1 from me. Andrew would make a great additiona. > > On Mon, Feb 12, 2018 at 5:00 PM, Ryan Beisner > wrote: > >> Hi All, >> >> I'd like to propose Andrew McLeod for the OpenStack Charmers (LP) and >> charms-core (Gerrit) teams. Andrew has made many commits and bugfixes to >> the OpenStack Charms over the past couple of years, and he has general >> charming knowledge and experience which is wider than just OpenStack. >> >> He has actively participated in the last two OpenStack Charms release >> processes. He is also the original author and current maintainer of the >> magpie charm. >> >> Cheers, >> >> Ryan >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > > -- > Alex Kavanagh - Software Engineer > Cloud Dev Ops - Solutions & Product Engineering - Canonical Ltd > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lbragstad at gmail.com Mon Feb 12 19:36:21 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Mon, 12 Feb 2018 13:36:21 -0600 Subject: [openstack-dev] [keystone] Queens backports Message-ID: Hey all, Now that we have a stable/queens branch, I've created a "queens-backport-potential" bug tag. Feel free to use this if you triage a bug that needs to be backported. Thanks, Lance -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Mon Feb 12 19:47:48 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 12 Feb 2018 13:47:48 -0600 Subject: [openstack-dev] [nova] Notification update week 7 In-Reply-To: <1518455505.18558.2@smtp.office365.com> References: <1518455505.18558.2@smtp.office365.com> Message-ID: <25f4b50c-3f8e-7dc1-8d8a-074caff4adba@gmail.com> On 2/12/2018 11:11 AM, Balázs Gibizer wrote: > Add the user id and project id of the user initiated the instance > action to the notification > ----------------------------------------------------------------- > The bp > https://blueprints.launchpad.net/nova/+spec/add-action-initiator-to-instance-action-notifications > was shortly discussed on the last nova weekyl meeting, there was no > objection but it still pending approval. This is approved now. We agreed to approve this in the the Feb 8 meeting, I just forgot to do it. -- Thanks, Matt From edmondsw at us.ibm.com Mon Feb 12 21:07:52 2018 From: edmondsw at us.ibm.com (William M Edmonds) Date: Mon, 12 Feb 2018 16:07:52 -0500 Subject: [openstack-dev] [osc][python-openstackclient] Consistency of option name In-Reply-To: References: <7af959ac-0463-e62a-83fd-8d7fc1d7d2ef@ham.ie> <1328ce1c-d725-f9f4-0987-15064740aef4@ham.ie> Message-ID: Graham Hayes wrote on 02/12/2018 11:17:45 AM: > On 12/02/18 16:04, William M Edmonds wrote: > > keystone may have taken "domain", but it didn't take "dns-domain" > > No, but the advice at the time was to move to zone, and match DNS > RFCs, and not namespace objects with the service type. > I wasn't trying to criticize or question history but rather to look forward. IF we change the name, "dns-domain" could be an option. That is all. -------------- next part -------------- An HTML attachment was scrubbed... URL: From edmondsw at us.ibm.com Mon Feb 12 21:13:30 2018 From: edmondsw at us.ibm.com (William M Edmonds) Date: Mon, 12 Feb 2018 16:13:30 -0500 Subject: [openstack-dev] [requirements] we are now unfrozen and branched In-Reply-To: <20180212162544.ws7u2nwlnrfltr3s@gentoo.org> References: <20180212162544.ws7u2nwlnrfltr3s@gentoo.org> Message-ID: I'm not seeing a stable/queens branch for openstack/requirements yet. Is that not what you meant? When is that projected? Matthew Thode wrote on 02/12/2018 11:25:44 AM: > This means we are back to business as usual. > > cycle trailing projects have been warned not to merge requirements > updates until they branch or get an ack from a requirements core. > > -- > Matthew Thode (prometheanfire) > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From opensrloo at gmail.com Mon Feb 12 21:14:28 2018 From: opensrloo at gmail.com (Ruby Loo) Date: Mon, 12 Feb 2018 16:14:28 -0500 Subject: [openstack-dev] [ironic] team dinner at Dublin PTG? 
In-Reply-To: <0363716E-BD26-4C72-900C-6B411B211C72@intel.com> References: <0363716E-BD26-4C72-900C-6B411B211C72@intel.com> Message-ID: On Mon, Feb 5, 2018 at 5:42 PM, Loo, Ruby wrote: > Hi ironic-ers, > > Planning for the Dublin PTG has started. And what's the most important > thing (and most fun event) to plan for? You got it, the team dinner! We'd > like to get an idea of who is interested and what evening works for all or > most of us. > > Please indicate which evenings you are available, at this doodle: > https://doodle.com/poll/d4ff6m9hxg887n9q > > If you're shy or don't want to use doodle, send me an email. > > Please respond by Friday, Feb 16 (same deadline as PTG > topics-for-discussion), so we can find a place and reserve it. > > Thanks! > --ruby > > Reminder to doodle [1] if you haven't done so already. Also, because we're all so keen to know when and where we're going, we want to decide sooner, so please respond by tomorrow (Tues, Feb 13). Thanks! --ruby [1] https://doodle.com/poll/d4ff6m9hxg887n9q -------------- next part -------------- An HTML attachment was scrubbed... URL: From prometheanfire at gentoo.org Mon Feb 12 21:36:16 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Mon, 12 Feb 2018 15:36:16 -0600 Subject: [openstack-dev] [requirements] we are now unfrozen and branched In-Reply-To: References: <20180212162544.ws7u2nwlnrfltr3s@gentoo.org> Message-ID: <20180212213616.3gtxqu4ncgz2vpn7@gentoo.org> On 18-02-12 16:13:30, William M Edmonds wrote: > I'm not seeing a stable/queens branch for openstack/requirements yet. Is > that not what you meant? When is that projected? > > Matthew Thode wrote on 02/12/2018 11:25:44 AM: > > This means we are back to business as usual. > > > > cycle trailing projects have been warned not to merge requirements > > updates until they branch or get an ack from a requirements core. > > It's done now, gate problems delayed it. -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From miguel at mlavalle.com Mon Feb 12 22:35:33 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Mon, 12 Feb 2018 16:35:33 -0600 Subject: [openstack-dev] Lunchtime during PTG Message-ID: Hi, In order to schedule team sessions during the PTG I would like to know the time lunch is going to be served. It is not in the schedule: https://www.openstack.org/ptg/#tab_schedule Cheers Miguel -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnsomor at gmail.com Mon Feb 12 22:46:14 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Mon, 12 Feb 2018 14:46:14 -0800 Subject: [openstack-dev] [neutron][lbaas][neutron-lbaas][octavia] Announcing the deprecation of neutron-lbaas and neutron-lbaas-dashboard In-Reply-To: References: <08d47fce-1fb0-0fa3-9ca7-cea25da60e3c@suse.com> <27b4cffd-8ffb-3afc-7ae5-8c8b6854c31e@suse.com> Message-ID: Hi Gary, All of the answers to your questions are on the FAQ linked in the announcement. 1: If you are already using the Octavia driver or the neutron-lbaas proxy driver, you are already migrated. We will provide a port migration tool to migrate the neutron port ownership from neutron-lbaas if neutron-lbaas was configured to create your VIP ports for the Octavia driver. For other drivers we will provide a migration tool during Rocky. The databases are very similar so migrating from neutron-lbaas will be fairly straight forward. 
2: You are correct that currently there are no vendor drivers for Octavia. As you probably know, the new and improved driver interface specification was merged during Queens (A representative from your employer contributed to the specification). We expect over the course of Rocky vendor drivers will become available. This is part of the motivation for announcing the start of deprecation. 3: This is in no way like the migration from LBaaS v1 to v2. The largest reason is the LBaaS v2 API is fully compatible with the Octavia API (it implements LBaaS v2). We did not change the model. The current load balancer team can't really talk to the choices made in the V1 to V2 API migrations. As I have mentioned in the FAQ and on your patch comments, neutron-lbaas will be maintained with bug fixes for the duration of the deprecation cycle, which will be a minimum of two OpenStack releases. I am sorry you did not get to participate in the discussions and vote for the start of the deprecation cycle. We announced that we were going to work towards this a year and a half ago in a specification approved by the neutron cores: http://specs.openstack.org/openstack/neutron-specs/specs/newton/kill-neutron-lbaas.html It has been a major topic at our PTG sessions, summits, and weekly meetings since then. The team vote that we are ready to announce was unanimous at our 1/24/2018 IRC meeting. I hope you will join us going forward to make this deprecation a success. We meet weekly on IRC and will be at the PTG. Michael On Mon, Feb 12, 2018 at 4:12 AM, Gary Kotton wrote: > Hi, > I have a number of issues with this: > 1. I do not think that we should mark this as deprecated until we have a clear and working migration patch. Let me give an example. Say I have a user who is using Pike or Queens and has N LBaaS load balancers up and running. What if we upgrade to T and there is no LBaaS, only Octavia. What is the migration path here? Maybe I have missed this and would be happy to learn how this was done. > 2. I think that none of the load balancing vendors have code in Octavia and this may be a problem (somewhat related to #1). I guess that there is enough warning but this is still concerning > 3. The migration from V1 to V2 was not successful. So, I have some concerns about going to a new service completely. > I prefer that we hold off on this until there is a clear picture. > Thanks > Gary > > On 2/1/18, 9:22 AM, "Andreas Jaeger" wrote: > > On 2018-01-31 22:58, Akihiro Motoki wrote: > > I don't think we need to drop translation support NOW (at least for > > neutron-lbaas-dashboard). > > There might be fixes which affects translation and/or there might be > > translation improvements. > > I don't think a deprecation means no translation fix any more. It > > sounds too aggressive. > > Is there any problem to keep translations for them? > > Reading the whole FAQ - since bug fixes are planned, translations can > merge back. So, indeed we can keep translation infrastructure set up. > > I recommend to translators to remove neutron-lbaas-dashboard from the > priority list, > > Andreas > > > Akihiro > > > > 2018-02-01 3:28 GMT+09:00 Andreas Jaeger : > >> In that case, I suggest to remove translation jobs for these repositories, > >> > >> Andreas > >> -- > >> Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi > >> SUSE LINUX GmbH, Maxfeldstr. 
5, 90409 Nürnberg, Germany > >> GF: Felix Imendörffer, Jane Smithard, Graham Norton, > >> HRB 21284 (AG Nürnberg) > >> GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 > >> > >> > >> __________________________________________________________________________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > -- > Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi > SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany > GF: Felix Imendörffer, Jane Smithard, Graham Norton, > HRB 21284 (AG Nürnberg) > GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
From ksnhr.tech at gmail.com Mon Feb 12 22:52:05 2018 From: ksnhr.tech at gmail.com (Kaz Shinohara) Date: Tue, 13 Feb 2018 07:52:05 +0900 Subject: [openstack-dev] [heat] No Meeting This week In-Reply-To: References: Message-ID: Hi Rico, Hope you and your family have good holidays :) I have one thing, hopefully before you're off. It looks like heat-dashboard does not have a stable/queens branch yet. I got an indication about this from the Horizon cores and they are waiting for heat-dashboard to have it. (we want to drop django <= 1.10 support along with Horizon for Rocky) Could you kindly take care of this issue? Your response will be highly appreciated. Regards, Kaz (kazsh) 2018-02-12 23:35 GMT+09:00 Rico Lin : > Hi all > Good news first! We released queens last week, so well done everyone. > > This week is Chinese new year, and Wednesday happen to be new year eve, so > I will not hosting the meeting this week. > > Let's skip this one if no important stuff to talk about. > > See you at next meeting > > > -- > May The Force of OpenStack Be With You, > > *Rico Lin*irc: ricolin > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL:
From no-reply at openstack.org Mon Feb 12 23:00:00 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Mon, 12 Feb 2018 23:00:00 -0000 Subject: [openstack-dev] [barbican] barbican 6.0.0.0rc1 (queens) Message-ID: Hello everyone, A new release candidate for barbican for the end of the Queens cycle is available!
You can find the source code tarball at: https://tarballs.openstack.org/barbican/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/barbican/log/?h=stable/queens Release notes for barbican can be found at: http://docs.openstack.org/releasenotes/barbican/
From juliaashleykreger at gmail.com Mon Feb 12 23:20:13 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Mon, 12 Feb 2018 15:20:13 -0800 Subject: [openstack-dev] [ironic] PTG Planning Etherpad Message-ID: Greetings fellow Ironic humanoids! We have had a planning etherpad [1] up for a couple weeks collecting ideas. If you plan on attending the Ironic sessions at the PTG, please post any additional ideas as well as feedback for the other ideas. We also need an idea of interest for the purposes of prioritization, so a +1 on items you feel are important will help us create an agenda. If you will not be attending the PTG and have an item that requires discussion, please feel free to add items and provide feedback. If there is a particular item that you feel needs the attention of the Ironic team, make sure that you provide additional context as to why an item is important, and any references that may help provide context. I will rank the items by priority, and generate a reasonable agenda from the ideas this coming Friday the 16th. Items added after 5 PM UTC on the Friday the 16th may not make the agenda. Thanks, -Julia [1] https://etherpad.openstack.org/p/ironic-rocky-ptg
From stig.openstack at telfer.org Mon Feb 12 23:52:07 2018 From: stig.openstack at telfer.org (Stig Telfer) Date: Mon, 12 Feb 2018 23:52:07 +0000 Subject: [openstack-dev] [ironic][tripleo] support for firmware update In-Reply-To: References: Message-ID: <81F9552C-298A-4394-835B-0641E2F4F4D9@telfer.org> Hi Moshe - It seems a bit risky to automatically apply firmware updates. For example, given a node will probably be rebooted for firmware updates to take effect, if other vendors also did this then perhaps the node could reboot unexpectedly in the middle of your update. In theory. The approach we’ve taken on handling firmware updates[1] has been to create a hardware manager for verifying firmware values during node cleaning and raising an exception if they do not match. The consequence is, nodes will drop into maintenance mode for manual inspection / intervention. We’ve then booted the node into a custom image to perform the update. Hope this helps, Stig [1] https://github.com/stackhpc/stackhpc-ipa-hardware-managers > On 8 Feb 2018, at 07:43, Moshe Levi wrote: > > Hi all, > > I saw that ironic-python-agent support custom hardware manager. > I would like to support firmware updates (In my case Mellanox nic) and I was wandering how custom hardware manager can be used in such case? > How it is integrated with ironic-python agent and also is there an integration to tripleO as well. > > The use case for use is just to make sure the correct firmware is installed on the nic and if not update it during the triple deployment.
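For readers wondering what such a hardware manager looks like in practice, the sketch below is a minimal, illustrative example of the verify-and-fail-cleaning approach Stig describes. It is not taken from the stackhpc repository above; the expected-version lookup in the node's extra field and the _read_nic_firmware_version() helper are assumptions made purely for illustration, and a real manager would replace them with vendor-specific detection logic and register itself via the ironic_python_agent.hardware_managers entry point:

from ironic_python_agent import errors
from ironic_python_agent import hardware


def _read_nic_firmware_version():
    # Stand-in for a vendor-specific probe (for example parsing
    # "ethtool -i <device>" output or calling a vendor CLI tool).
    return 'unknown'


class NicFirmwareVerifyHardwareManager(hardware.HardwareManager):
    """Illustrative manager that only verifies NIC firmware while cleaning."""

    HARDWARE_MANAGER_NAME = 'NicFirmwareVerifyHardwareManager'
    HARDWARE_MANAGER_VERSION = '1.0'

    def evaluate_hardware_support(self):
        # A real manager would detect whether the target NIC is present
        # before claiming a high support level.
        return hardware.HardwareSupport.SERVICE_PROVIDER

    def get_clean_steps(self, node, ports):
        return [{
            'step': 'verify_nic_firmware',
            'priority': 95,
            'interface': 'deploy',
            'reboot_requested': False,
            'abortable': True,
        }]

    def verify_nic_firmware(self, node, ports):
        # Illustrative convention: the operator records the expected
        # version in the node's extra field before cleaning starts.
        expected = node.get('extra', {}).get('expected_nic_firmware')
        actual = _read_nic_firmware_version()
        if expected and actual != expected:
            # Failing the step sends the node to clean failed and sets
            # maintenance, matching the manual-intervention workflow above.
            raise errors.CleaningError(
                'NIC firmware %s does not match expected %s'
                % (actual, expected))
        return True

The hardware_managers contributor document linked in the quoted message below covers how IPA discovers such managers and how the clean steps are run.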
> > > > > [1] - https://docs.openstack.org/ironic-python-agent/pike/contributor/hardware_managers.html > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From zhipengh512 at gmail.com Tue Feb 13 00:06:08 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Tue, 13 Feb 2018 08:06:08 +0800 Subject: [openstack-dev] [nova][cyborg]Dublin PTG Cyborg Nova Interaction Discussion In-Reply-To: <99F0F3E9-58B3-41B5-924E-F66002AFC1D1@leafe.com> References: <8dc83751-af2d-5f55-fefc-8a570be9680c@fried.cc> <4ad0d302-2839-ad6e-159f-3c509aaaa7f0@gmail.com> <99F0F3E9-58B3-41B5-924E-F66002AFC1D1@leafe.com> Message-ID: Let's settle on Tuesday afternoon session then, thanks a lot :) On Tue, Feb 13, 2018 at 1:43 AM, Ed Leafe wrote: > On Feb 12, 2018, at 9:57 AM, Chris Dent wrote: > > > > Tuesday would be best for me as Monday is api-sig day. > > Same, for the same reason. > > > -- Ed Leafe > > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Tue Feb 13 00:13:58 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 13 Feb 2018 00:13:58 +0000 Subject: [openstack-dev] Lunchtime during PTG In-Reply-To: References: Message-ID: He Miguel :) We have lunch scheduled from 12:30-1:30 with the last half hour being presentations while you eat. For topics, see this thread[1] Let me know if you have any other questions! -Kendall (diablo_rojo) [1] http://lists.openstack.org/pipermail/openstack-tc/2018-February/001492.html On Mon, Feb 12, 2018 at 2:35 PM Miguel Lavalle wrote: > Hi, > > In order to schedule team sessions during the PTG I would like to know the > time lunch is going to be served. It is not in the schedule: > https://www.openstack.org/ptg/#tab_schedule > > Cheers > > Miguel > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Tue Feb 13 00:26:36 2018 From: melwittt at gmail.com (melanie witt) Date: Mon, 12 Feb 2018 16:26:36 -0800 Subject: [openstack-dev] [nova] github.com/openstack/nova-specs mirror temporarily broken Message-ID: Hey Stackers, This is just a heads up that the github mirror of the nova-specs repo has been temporarily broken for the past few months and doesn’t have the specs/rocky/ directory in it. 
Fixing it is on the TODO list for the next scheduled maintenance window, but until then, please git clone the https://git.openstack.org/cgit/openstack/nova-specs version to propose your specs [1] for Rocky. Cheers, -melanie [1] http://specs.openstack.org/openstack/nova-specs/readme.html From melwittt at gmail.com Tue Feb 13 00:32:49 2018 From: melwittt at gmail.com (melanie witt) Date: Mon, 12 Feb 2018 16:32:49 -0800 Subject: [openstack-dev] [nova] github.com/openstack/nova-specs mirror temporarily broken In-Reply-To: References: Message-ID: <3D08236B-F288-479E-A791-267AEEA29621@gmail.com> > On Feb 12, 2018, at 16:26, melanie witt wrote: > > Hey Stackers, > > This is just a heads up that the github mirror of the nova-specs repo has been temporarily broken for the past few months and doesn’t have the specs/rocky/ directory in it. Fixing it is on the TODO list for the next scheduled maintenance window, but until then, please git clone the https://git.openstack.org/cgit/openstack/nova-specs version to propose your specs [1] for Rocky. > > Cheers, > -melanie > > [1] http://specs.openstack.org/openstack/nova-specs/readme.html s/maintenance/gerrit maintenance/ From ekcs.openstack at gmail.com Tue Feb 13 01:00:21 2018 From: ekcs.openstack at gmail.com (Eric K) Date: Mon, 12 Feb 2018 17:00:21 -0800 Subject: [openstack-dev] [monasca][congress] help configuring monasca for gate Message-ID: Hi Monasca folks, I'm trying to configure monasca in congress gate [1] and modeled it after this monasca playbook [2]. But I get: rsync: change_dir "/home/zuul/src/*/openstack/monasca-common" failed: No such file or directory (2) http://logs.openstack.org/22/530522/1/check/congress-devstack-api-mysql/166 d935/logs/devstack-gate-setup-workspace-new.txt.gz#_2017-12-30_01_53_41_607 Any hints on what I need to do differently? Thanks! [1] https://review.openstack.org/#/c/530522/ [2] https://github.com/openstack/monasca-api/blob/master/playbooks/legacy/monas ca-tempest-base/run.yaml From najoy at cisco.com Tue Feb 13 01:42:26 2018 From: najoy at cisco.com (Naveen Joy (najoy)) Date: Tue, 13 Feb 2018 01:42:26 +0000 Subject: [openstack-dev] networking-vpp 18.01 for VPP 18.01 is now available Message-ID: <76037DE0-AD0F-460A-8186-832A9C0A7E59@cisco.com> Hello Everyone, In conjunction with the release of VPP 18.01, we'd like to invite you all to try out networking-vpp 18.01 for VPP 18.01. VPP is a fast userspace forwarder based on the DPDK toolkit, and uses vector packet processing algorithms to minimize the CPU time spent on each packet and maximize throughput. networking-vpp is a ML2 mechanism driver that controls VPP on your control and compute hosts to provide fast L2 forwarding under Neutron. This version has a few additional enhancements, along with supporting the VPP 18.01 APIs: - L3 HA - VM Live Migration - Neutron protocol names in a security group rule Along with this, there have been the usual upkeep as Neutron versions change, bug fixes, code and test improvements. The README [1] explains how you can try out VPP using devstack: the devstack plugin will deploy the mechanism driver and VPP itself and should give you a working system with a minimum of hassle. It will use the etcd version deployed by newer versions of devstack. We will be continuing our development between now and VPP's 18.04 release in April. There are several features we're planning to work on (you'll find a list in our RFE bugs at [2]), and we welcome anyone who would like to come help us. 
Everyone is welcome to join our biweekly IRC meetings, every other Monday (the next one is due in a week), 0800 PST = 1600 GMT. -- Naveen & Ian [1]https://github.com/openstack/networking-vpp/blob/master/README.rst [2]http://goo.gl/i3TzAt -------------- next part -------------- An HTML attachment was scrubbed... URL: From pratapagoutham at gmail.com Tue Feb 13 01:54:14 2018 From: pratapagoutham at gmail.com (Goutham Pratapa) Date: Tue, 13 Feb 2018 07:24:14 +0530 Subject: [openstack-dev] [all][Kingbird][Heat][Glance]Multi-Region Orchestrator In-Reply-To: References: <2500e357-23a3-2d53-0b5c-591dbd0d4cbb@redhat.com> Message-ID: Hi Colleen, Thanks for writing to us. Sure, we will definitely try to help as much as we can :). On Mon, Feb 12, 2018 at 8:44 PM, Colleen Murphy wrote: > On Mon, Feb 12, 2018 at 7:44 AM, Goutham Pratapa > wrote: > > > > > OUR USE-CASES QUOTA-MANAGEMENT: > > > > 1. Admin must have a global view of all quotas to all tenants across all > the > > regions > > 2. Admin can periodically balance the quotas (we have a formula using > which > > we do this balancing ) across regions > > 3. Admin can update, Delete quotas for tenants > > 4. Admin can sync quotas for all tenants so that the quotas will be > updated > > in all regions. > > Global quota management is something we're seeking to solve in > keystone[1][2][3][4], which would enable admins to do 1, 3, and 4 via > keystone (though admittedly this is a few cycles out). We expect to > dive into this at the PTG if you'd like to help shape this work. > > [1] http://specs.openstack.org/openstack/keystone-specs/ > specs/keystone/ongoing/unified-limits.html > [2] http://specs.openstack.org/openstack/keystone-specs/ > specs/keystone/queens/limits-api.html > [3] https://review.openstack.org/#/c/441203/ > [4] https://review.openstack.org/#/c/540803/ > > Colleen > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Cheers !!! Goutham Pratapa -------------- next part -------------- An HTML attachment was scrubbed... URL: From ekcs.openstack at gmail.com Tue Feb 13 02:59:16 2018 From: ekcs.openstack at gmail.com (Eric K) Date: Mon, 12 Feb 2018 18:59:16 -0800 Subject: [openstack-dev] [monasca][congress] help configuring monasca for gate In-Reply-To: References: Message-ID: Oops. Nevermind. Looks like it's working now. On 2/12/18, 5:00 PM, "Eric K" wrote: >Hi Monasca folks, >I'm trying to configure monasca in congress gate [1] and modeled it after >this monasca playbook [2]. But I get: >rsync: change_dir "/home/zuul/src/*/openstack/monasca-common" failed: No >such file or directory (2) > >http://logs.openstack.org/22/530522/1/check/congress-devstack-api-mysql/16 >6 >d935/logs/devstack-gate-setup-workspace-new.txt.gz#_2017-12-30_01_53_41_60 >7 > > >Any hints on what I need to do differently? Thanks! > >[1] https://review.openstack.org/#/c/530522/ >[2] >https://github.com/openstack/monasca-api/blob/master/playbooks/legacy/mona >s >ca-tempest-base/run.yaml > > From zhipengh512 at gmail.com Tue Feb 13 03:20:53 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Tue, 13 Feb 2018 11:20:53 +0800 Subject: [openstack-dev] [cyborg]Cyborg PTG Schedule Message-ID: Hi Team, After some discussion, we have tentatively setup our project specific schedule for Dublin PTG meeting. 
*Meetings:* In general, Cyborg project gathering will happen on Monday and Tuesday [0]. Due to several meeting conflicts, we divide our gathering meeting into two parts for each day: (1) Office Hour: 9:00am - 12:00pm. Cyborg core members will be around our meeting room to answer questions, help with devstack setup, and so forth. (2) Discussion Session: 2:00pm - 6:00pm. Rocky Cycle dev discussion. We will have a ZOOM conference for each topic and thus each topic gets 40 mins. *Team Photo:* Team photo is Tuesday 11:50 - 12:00 [1] *Team Interview:* Team Interview is Tuesday 13:00 [2] *Team Dinner:* We will have team dinner on Tuesday night, venue info to be updated later :) For the specific topic arrangement, please refer to [3] [0] https://www.openstack.org/ptg/#tab_schedule [1] https://docs.google.com/spreadsheets/d/1J2MRdVQzSyakz9HgTHfwYPe49PaoTypX66eNURsopQY/edit?usp=sharing [2] https://docs.google.com/spreadsheets/d/1MK7rCgYXCQZP1AgQ0RUiuc-cEXIzW5RuRzz5BWhV4nQ/edit#gid=0 [3] https://etherpad.openstack.org/p/cyborg-ptg-rocky -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL:
From slawek at kaplonski.pl Tue Feb 13 08:13:41 2018 From: slawek at kaplonski.pl (Sławomir Kapłoński) Date: Tue, 13 Feb 2018 09:13:41 +0100 Subject: [openstack-dev] [neutron] QoS IRC meeting Message-ID: <48A0A477-73CC-4ACA-BF04-8DE495B2BD8E@kaplonski.pl> Hi, I have to cancel today's Neutron QoS IRC meeting. If you have something related to QoS that you want to talk about, please catch me on the openstack-neutron IRC channel. — Best regards Slawek Kaplonski slawek at kaplonski.pl
From balazs.gibizer at ericsson.com Tue Feb 13 09:25:43 2018 From: balazs.gibizer at ericsson.com (Balázs Gibizer) Date: Tue, 13 Feb 2018 10:25:43 +0100 Subject: [openstack-dev] [nova] Notification update week 7 In-Reply-To: <25f4b50c-3f8e-7dc1-8d8a-074caff4adba@gmail.com> References: <1518455505.18558.2@smtp.office365.com> <25f4b50c-3f8e-7dc1-8d8a-074caff4adba@gmail.com> Message-ID: <1518513943.18558.3@smtp.office365.com> On Mon, Feb 12, 2018 at 8:47 PM, Matt Riedemann wrote: > On 2/12/2018 11:11 AM, Balázs Gibizer wrote: >> Add the user id and project id of the user initiated the instance >> action to the notification >> ----------------------------------------------------------------- >> The bp >> https://blueprints.launchpad.net/nova/+spec/add-action-initiator-to-instance-action-notifications >> was shortly discussed on the last nova weekyl meeting, there was no >> objection but it still pending approval. > > This is approved now. We agreed to approve this in the the Feb 8 > meeting, I just forgot to do it. Cool, thanks!
Cheers, gibi > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
From rico.lin.guanyu at gmail.com Tue Feb 13 09:41:23 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Tue, 13 Feb 2018 17:41:23 +0800 Subject: [openstack-dev] [heat] No Meeting This week In-Reply-To: References: Message-ID: > Looks heat-dashboard does not have stable/queens branch yet. > I got an indication about this from Horizon core and they are waiting for that heat-dashboard will have it. > (we want to drop django <= 1.10 support along with Horizon for Rocky) > Cloud you kindly take care of this issue ? Thanks Kaz Shinohara. I have targeted a release here now; hopefully it will fix the issue: https://review.openstack.org/543866 I see you tried to drop the specific django versions, but sent a revert request later. We can actually target a specific commit as the release point, so you don't actually need to revert it (unless it breaks others). -------------- next part -------------- An HTML attachment was scrubbed... URL:
From thierry at openstack.org Tue Feb 13 10:27:18 2018 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 13 Feb 2018 11:27:18 +0100 Subject: [openstack-dev] [ptg] Selected post-lunch presentations Message-ID: Hi everyone, The TC members selected the topics for the post-lunch presentations at the PTG. The idea is to finish the lunch break with some light infusion of knowledge. Those will happen starting at 1pm in the lunch room. Discussions restart at 1:30pm so we recommend targeting a 20-min talk + 10-min Q&A (or 25/5). Here are the topics: Monday: Welcome to the PTG / housekeeping / set tone / situational awareness / release goals (coordinator: ttx) Tuesday: Infra/QA update, including Zuulv3 (andreaf, corvus) Wednesday: OpenStackSDK (mordred) Thursday: Release process (smcginnis, dhellmann) Friday: Lightning talks (contact thingee to book a slot) -- Thierry Carrez (ttx)
From hjensas at redhat.com Tue Feb 13 10:40:20 2018 From: hjensas at redhat.com (Harald Jensås) Date: Tue, 13 Feb 2018 11:40:20 +0100 Subject: [openstack-dev] [tripleo] Updates on containerized undercloud In-Reply-To: References: Message-ID: <1518518420.15968.6.camel@redhat.com> On Fri, 2018-02-09 at 14:39 -0800, Emilien Macchi wrote: > On Fri, Feb 9, 2018 at 2:30 PM, James Slagle > wrote: > [...] > > > You may want to add an item for the routed ctlplane work that > > landed > > at the end of Queens. Afaik, that will need to be supported with > > the > > containerized undercloud. > > Done: https://trello.com/c/kFtIkto1/17-routed-ctlplane-networking > Thanks Emilien, I added several work items to the Trello card, and a few patches. Still WiP. Do we have any CI that uses a containerized undercloud with an actual Ironic deployment? Or are they all using deployed-server? E.g. do we have anything actually testing this type of change? https://review.openstack.org/#/c/543582 I believe that would have to be an ovb job with containerized undercloud?
--  | Harald Jensås         | hjensas:irc
From a.chadin at servionica.ru Tue Feb 13 12:18:49 2018 From: a.chadin at servionica.ru (Чадин Александр) Date: Tue, 13 Feb 2018 12:18:49 +0000 Subject: [openstack-dev] [watcher] weekly meeting has changed Message-ID: <15B5F438-5635-415D-AAB3-0C2170CD7FB0@servionica.ru> Hi Watchers, As many of you asked, we have changed the odd-week meeting time from 13:00 UTC to 08:00 UTC. We will have the meeting tomorrow at 08:00 UTC on #openstack-meeting-alt. Don’t forget to make changes in your calendar[1] ;) [1]: http://eavesdrop.openstack.org/#Watcher_Team_Meeting Best Regards, ____ Alex -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL:
From beagles at redhat.com Tue Feb 13 14:02:39 2018 From: beagles at redhat.com (Brent Eagles) Date: Tue, 13 Feb 2018 10:32:39 -0330 Subject: [openstack-dev] [tripleo] [neutron] Current containerized neutron agents introduce a significant regression in the dataplane Message-ID: Hi, The neutron agents are implemented in such a way that key functionality is implemented in terms of haproxy, dnsmasq, keepalived and radvd configuration. The agents manage instances of these services but, by design, the agent is the top-most parent (pid 1). On baremetal this has the advantage that, while control plane changes cannot be made while the agents are not available, the configuration at the time the agents were stopped will keep working (for example, VMs that are restarted can request their IPs, etc). In short, the dataplane is not affected by shutting down the agents. In the TripleO containerized version of these agents, the supporting processes (haproxy, dnsmasq, etc.) are run within the agent's container, so when the container is stopped, the supporting processes are also stopped. That is, the behavior with the current containers is significantly different than on baremetal, and stopping/restarting containers effectively breaks the dataplane. At the moment this is being considered a blocker and unless we can find a resolution, we may need to recommend running the L3, DHCP and metadata agents on baremetal. Cheers, Brent Eagles Daniel Alvarez -------------- next part -------------- An HTML attachment was scrubbed... URL:
From gmann at ghanshyammann.com Tue Feb 13 14:05:34 2018 From: gmann at ghanshyammann.com (gmann) Date: Tue, 13 Feb 2018 23:05:34 +0900 Subject: [openstack-dev] [infra] [all] project pipeline definition should stay in project-config or project side ? Message-ID: Hi Infra Team, I have one quick question about zuulv3 jobs and their migration. From the zuulv3 doc [1], it is clear how to migrate job definitions and use them among cross-repo pipelines, etc. But I did not find a clear recommendation on whether a project's pipeline definition should stay in project-config or whether we should move it to the project side. IMO, the 'template' part (which has system-level jobs) can stay in project-config.
For example, the part below: https://github.com/openstack-infra/project-config/blob/e2b82623a4ab60261b37a91e311118301927b9b6/zuul.d/projects.yaml#L10507-L10523 The other pipeline definitions - 'check', 'gate', 'experimental', etc. - should be moved to the project repo, mainly this list: https://github.com/openstack-infra/project-config/blob/master/zuul.d/projects.yaml#L10524-L11019 If we move those parts as mentioned above, then we can have a consolidated place to control the project pipeline for 'irrelevant-files', specific branches, etc. [1] https://docs.openstack.org/infra/manual/zuulv3.html -gmann
From emilien at redhat.com Tue Feb 13 14:41:38 2018 From: emilien at redhat.com (Emilien Macchi) Date: Tue, 13 Feb 2018 06:41:38 -0800 Subject: Re: [openstack-dev] [tripleo] Updates on containerized undercloud In-Reply-To: <1518518420.15968.6.camel@redhat.com> References: <1518518420.15968.6.camel@redhat.com> Message-ID: On Tue, Feb 13, 2018 at 2:40 AM, Harald Jensås wrote: > On Fri, 2018-02-09 at 14:39 -0800, Emilien Macchi wrote: > > On Fri, Feb 9, 2018 at 2:30 PM, James Slagle > > wrote: > > [...] > > > > > You may want to add an item for the routed ctlplane work that > > > landed > > > at the end of Queens. Afaik, that will need to be supported with > > > the > > > containerized undercloud. > > > > Done: https://trello.com/c/kFtIkto1/17-routed-ctlplane-networking > > > > Tanks Emilien, > > > I added several work items to the Trello card, and a few patches. Still > WiP. > > Do we have any CI that use containerized undercloud with actual Ironic > deployement? Or are they all using deployed-server? > > E.g do we have anything actually testing this type of change? > https://review.openstack.org/#/c/543582 > > I belive that would have to be an ovb job with containerized undercloud? > I've been working on it since last week: https://trello.com/c/uLqbHTip/13-switch-other-jobs-to-run-a-containerized-undercloud But currently I'm trying to make things stable again; we introduced regressions and this is high prio now. -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL:
From witold.bedyk at est.fujitsu.com Tue Feb 13 14:45:43 2018 From: witold.bedyk at est.fujitsu.com (Bedyk, Witold) Date: Tue, 13 Feb 2018 14:45:43 +0000 Subject: [openstack-dev] [monasca][congress] help configuring monasca for gate Message-ID: Hi Eric, glad to hear the problems are solved :) What are your plans around integration with Monasca? Please let us know if you have related feature requests. Cheers Witek > -----Original Message----- > From: Eric K [mailto:ekcs.openstack at gmail.com] > Sent: Tuesday, 13 February 2018 03:59 > To: OpenStack Development Mailing List (not for usage questions) > > Subject: Re: [openstack-dev] [monasca][congress] help configuring monasca > for gate > > Oops. Nevermind. Looks like it's working now. > > On 2/12/18, 5:00 PM, "Eric K" wrote: > > >Hi Monasca folks, > >I'm trying to configure monasca in congress gate [1] and modeled it > >after this monasca playbook [2]. But I get: > >rsync: change_dir "/home/zuul/src/*/openstack/monasca-common" failed: > >No such file or directory (2) > > > >http://logs.openstack.org/22/530522/1/check/congress-devstack-api-mysql > >/16 > >6 > >d935/logs/devstack-gate-setup-workspace-new.txt.gz#_2017-12- > 30_01_53_41 > >_60 > >7 > > > > > >Any hints on what I need to do differently? Thanks! > > > >[1] https://review.openstack.org/#/c/530522/ > >[2] > >https://github.com/openstack/monasca- > api/blob/master/playbooks/legacy/m > >ona > >s > >ca-tempest-base/run.yaml > > > > > > > >
> > > >[1] https://review.openstack.org/#/c/530522/ > >[2] > >https://github.com/openstack/monasca- > api/blob/master/playbooks/legacy/m > >ona > >s > >ca-tempest-base/run.yaml > > > > > > > > __________________________________________________________ > ________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev- > request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From pabelanger at redhat.com Tue Feb 13 15:06:10 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Tue, 13 Feb 2018 10:06:10 -0500 Subject: [openstack-dev] [infra] [all] project pipeline definition should stay in project-config or project side ? In-Reply-To: References: Message-ID: <20180213150610.GA26600@localhost.localdomain> On Tue, Feb 13, 2018 at 11:05:34PM +0900, gmann wrote: > Hi Infra Team, > > I have 1 quick question on zuulv3 jobs and their migration part. From > zuulv3 doc [1], it is clear about migrating the job definition and use > those among cross repo pipeline etc. > > But I did not find clear recommendation that whether project's > pipeline definition should stay in project-config or we should move > that to project side. > > IMO, > 'template' part(which has system level jobs) can stay in > project-config. For example below part- > > https://github.com/openstack-infra/project-config/blob/e2b82623a4ab60261b37a91e311118301927b9b6/zuul.d/projects.yaml#L10507-L10523 > > Other pipeline definition- 'check', 'gate', 'experimental' etc should > be move to project repo, mainly this list- > https://github.com/openstack-infra/project-config/blob/master/zuul.d/projects.yaml#L10524-L11019 > > If we move those past as mentioned above then, we can have a > consolidated place to control the project pipeline for > 'irrelevant-files', specific branch etc > > ..1 https://docs.openstack.org/infra/manual/zuulv3.html > As it works today, pipeline stanza needs to be in a config project[1] (project-config) repo. So what you are suggestion will not work. This was done to allow zuul admins to control which pipelines are setup / configured. I am mostly curious why a project would need to modify a pipeline configuration or duplicate it into all projects, over having it central located in project-config. [1] https://docs.openstack.org/infra/zuul/user/config.html#pipeline > > -gmann > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From cdent+os at anticdent.org Tue Feb 13 15:08:24 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 13 Feb 2018 15:08:24 +0000 (GMT) Subject: [openstack-dev] [tc] [all] TC Report 18-07 Message-ID: HTML: https://anticdent.org/tc-report-18-07.html A few things to report from the past week of TC interaction. Not much in the way of opportunities to opine or editorialize. # Goals Still more on the topic of OpenStack wide goals. There was some robust [discussion](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-02-07.log.html#t2018-02-07T12:56:13) about the [mox goal](https://governance.openstack.org/tc/goals/rocky/mox_removal.html) (eventually leading to accepting the goal, despite some reservations). That discussion somehow meandered into where gerrit is on the "least bad" to "most good" scale. 
# PostgreSQL and Triggers Later in the [same day](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-02-07.log.html#t2018-02-07T17:18:56) there was some discussion about the state of PostgreSQL support and the merit of using triggers to manage migrations. There were some pretty strong (and supported) assertions that triggers are not a good choice. # PowerStackers and Driver Projects [Discussion](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-02-08.log.html#t2018-02-08T15:43:44) about having a [PowerStackers](https://review.openstack.org/#/c/540165/) project evolved into a review of the current thinking on dealing with projects that require access to special drivers or hardware. This is a common discussion in OpenStack because so much of the point of OpenStack is to provide an abstraction over _stuff_. # Prepping the PTG There's a growing [etherpad](https://etherpad.openstack.org/p/PTG-Dublin-TC-topics) of topics to be discussed with the TC Friday morning at the PTG. You should feel free to add topics and show up and keep the TC in check. The room the meeting will be in [is glorious](https://crokepark.ie/meetings-events/spaces/space-listing/coiste-bainisti) in its criminal-mastermind-wonderfulness. -- Chris Dent (⊙_⊙') https://anticdent.org/ freenode: cdent tw: @anticdent From alee at redhat.com Tue Feb 13 15:17:59 2018 From: alee at redhat.com (Ade Lee) Date: Tue, 13 Feb 2018 10:17:59 -0500 Subject: [openstack-dev] [barbican] weekly meeting time Message-ID: <1518535079.22990.9.camel@redhat.com> Hi all, The Barbican weekly meeting has been fairly sparsely attended for a little while now, and the most active contributors these days appear to be in Asia. Its time to consider moving the weekly meeting to a time when more contributors can attend. I'm going to propose a couple times below to start out. 2 am UTC Tuesday == 9 pm EST Monday == 10 am CST (China) Tuesday 3 am UTC Tuesday == 10 pm EST Monday == 11 am CST (China) Tuesday Feel free to propose other days/times. Thanks, Ade P.S. Until decided otherwise, the Barbican meeting remains on Mondays at 2000 UTC From andrea.frittoli at gmail.com Tue Feb 13 15:34:36 2018 From: andrea.frittoli at gmail.com (Andrea Frittoli) Date: Tue, 13 Feb 2018 15:34:36 +0000 Subject: [openstack-dev] [infra] [all] project pipeline definition should stay in project-config or project side ? In-Reply-To: <20180213150610.GA26600@localhost.localdomain> References: <20180213150610.GA26600@localhost.localdomain> Message-ID: On Tue, Feb 13, 2018 at 3:06 PM Paul Belanger wrote: > On Tue, Feb 13, 2018 at 11:05:34PM +0900, gmann wrote: > > Hi Infra Team, > > > > I have 1 quick question on zuulv3 jobs and their migration part. From > > zuulv3 doc [1], it is clear about migrating the job definition and use > > those among cross repo pipeline etc. > > > > But I did not find clear recommendation that whether project's > > pipeline definition should stay in project-config or we should move > > that to project side. > > > > IMO, > > 'template' part(which has system level jobs) can stay in > > project-config. For example below part- I think there are pros and cons in both cases, but I lean more towards having everything in tree. If everything moves into the project then the configuration of what runs for a project is more or less in one place, so it's a bit more readable and projects are in control. 
On the other side adding a template maintained by infra/qa to a number of projects transforms into a potentially very large set of changes. But I don't think adding a new template happens so often, and it would still be possible for infra/qa to define usage of that template in project-config and then for projects to move that in tree over time. Andrea Frittoli (andreaf) > > > > https://github.com/openstack-infra/project-config/blob/e2b82623a4ab60261b37a91e311118301927b9b6/zuul.d/projects.yaml#L10507-L10523 > > > > Other pipeline definition- 'check', 'gate', 'experimental' etc should > > be move to project repo, mainly this list- > > > https://github.com/openstack-infra/project-config/blob/master/zuul.d/projects.yaml#L10524-L11019 > > > > If we move those past as mentioned above then, we can have a > > consolidated place to control the project pipeline for > > 'irrelevant-files', specific branch etc > > > > ..1 https://docs.openstack.org/infra/manual/zuulv3.html > > > As it works today, pipeline stanza needs to be in a config project[1] > (project-config) repo. So what you are suggestion will not work. This was > done > to allow zuul admins to control which pipelines are setup / configured. > I think gmann referred to the list of jobs defined in each pipeline by a project as opposed to the definition of the pipeline itself. > > I am mostly curious why a project would need to modify a pipeline > configuration > or duplicate it into all projects, over having it central located in > project-config. > > [1] https://docs.openstack.org/infra/zuul/user/config.html#pipeline > > > > -gmann > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gord at live.ca Tue Feb 13 16:31:23 2018 From: gord at live.ca (gordon chung) Date: Tue, 13 Feb 2018 16:31:23 +0000 Subject: [openstack-dev] [tc] [all] TC Report 18-07 In-Reply-To: References: Message-ID: On 2018-02-13 10:08 AM, Chris Dent wrote: > > # PostgreSQL and Triggers > > Later in the [same > day](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-02-07.log.html#t2018-02-07T17:18:56) > > there was some discussion about the state of PostgreSQL support and > the merit of using triggers to manage migrations. There were some > pretty strong (and supported) assertions that triggers are not a good > choice. was there a resolution for this? iiuc, pgsql is not supported by glance based on: https://github.com/openstack/glance/commit/f268df1cbc3c356c472ace04bd4f2d4b3da6c026 i don't know if it was a bad commit but it seems to break any case that tries to use pgsql (except if the db has all migrations applied already). not making an opinion here, just asking as i have a patch[1] to remove pgsql gate from one of the telemetry projects that was affected and wondering if i should proceed. 
[1] https://review.openstack.org/#/c/542240/ cheers, -- gord From dimitri.pertin at inria.fr Tue Feb 13 16:56:10 2018 From: dimitri.pertin at inria.fr (Dimitri Pertin) Date: Tue, 13 Feb 2018 17:56:10 +0100 Subject: [openstack-dev] [FEMDC] Wed. 14 Feb - IRC Meeting 15:00 UTC Message-ID: <0572445f-e3e0-4ca0-e0c3-9cd8ca46ccad@inria.fr> Dear all, A gentle reminder for our tomorrow meeting at 15:00 UTC. A draft of the agenda is available at line 202, you are very welcome to add any item: https://etherpad.openstack.org/p/massively_distributed_ircmeetings_2018 Best regards, Dimitri -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Tue Feb 13 17:51:33 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 13 Feb 2018 11:51:33 -0600 Subject: [openstack-dev] [tc] [all] TC Report 18-07 In-Reply-To: References: Message-ID: <4619c5f5-b811-3e78-35dd-de0f9e396e20@gmail.com> On 2/13/2018 10:31 AM, gordon chung wrote: > was there a resolution for this? iiuc, pgsql is not supported by glance > based on: > https://github.com/openstack/glance/commit/f268df1cbc3c356c472ace04bd4f2d4b3da6c026 > > i don't know if it was a bad commit but it seems to break any case that > tries to use pgsql (except if the db has all migrations applied already). > > not making an opinion here, just asking as i have a patch[1] to remove > pgsql gate from one of the telemetry projects that was affected and > wondering if i should proceed. > > [1]https://review.openstack.org/#/c/542240/ Wow, not even a release note on that glance change. As I commented in the review, if you're using postgresql, it doesn't matter if you don't care about rolling upgrade support for glance, you literally just can't upgrade *at all* to queens for glance now. Surely there must be some kind of way we can put in logic like, "if engine == postgresql: log a warning and do the offline schema migration" ? -- Thanks, Matt From gord at live.ca Tue Feb 13 18:05:43 2018 From: gord at live.ca (gordon chung) Date: Tue, 13 Feb 2018 18:05:43 +0000 Subject: [openstack-dev] [tc] [all] TC Report 18-07 In-Reply-To: References: Message-ID: On 2018-02-13 11:31 AM, gordon chung wrote: > > > was there a resolution for this? iiuc, pgsql is not supported by glance > based on: > https://github.com/openstack/glance/commit/f268df1cbc3c356c472ace04bd4f2d4b3da6c026 > err... nevermind. it seems https://github.com/openstack/glance/commit/106de18f326247e8d3f715452665b96dade6d2bc fixes it... or it seems i can run devstack. carry on :) -- gord From emilien at redhat.com Tue Feb 13 18:54:31 2018 From: emilien at redhat.com (Emilien Macchi) Date: Tue, 13 Feb 2018 10:54:31 -0800 Subject: [openstack-dev] [tripleo] The Weekly Owl - 9th Edition Message-ID: Note: this is the ninth edition of a weekly update of what happens in TripleO. The goal is to provide a short reading (less than 5 minutes) to learn where we are and what we're doing. Any contributions and feedback are welcome. Link to the previous version: http://lists.openstack.org/pipermail/openstack-dev/2018-February/127034.html +---------------------------------+ | General announcements | +---------------------------------+ +--> Focus is on releasing Queens RC1 and branching stable/queens before end of February if possible. See details bellow. 
+--> PTG planning continues on https://etherpad.openstack.org/p/tripleo-ptg-rocky +-----------------------+ | Blockers for RC1 | +-----------------------+ +--> https://review.openstack.org/#/c/543862/ +--> first RDO Queens promotion, with periodic-tripleo-centos-7-queens-* jobs. +--> We need https://review.openstack.org/#/c/539057/ to pass green, which proves CI is ready for stable/queens branch. +--> Land the last bits for composable networks: https://review.openstack.org/523638 (if our community agrees to do so) +--> Other? e.g. neutron-container issue, FFU, p-q upgrades (these things can be backported afterward). +--> Any inputs on these blockers are welcome, we aim to release & branch by around the PTG time probably. +------------------------------+ | Continuous Integration | +------------------------------+ +--> Rover is Rafael and ruck is Sagi. Please let them know any new CI issue. +--> Master promotion is 15 days, Pike is 2 days and Ocata is 0 days. +--> More: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting and https://goo.gl/D4WuBP +-------------+ | Upgrades | +-------------+ +--> Reviews are *highly* needed on FFU, Queens upgrade workflow +--> More: https://etherpad.openstack.org/p/tripleo-upgrade-squad-status and https://etherpad.openstack.org/p/tripleo-upgrade-squad-meeting +---------------+ | Containers | +---------------+ +--> Containerized undercloud has made progress in CI (well and broke other things but being fixed right now). +--> You can follow our work here: https://trello.com/b/nmGSNPoQ/containerized-undercloud +--------------+ | Integration | +--------------+ +--> Rocky planning, no major updates this week. +--> More: https://etherpad.openstack.org/p/tripleo-integration-squad-status +---------+ | UI/CLI | +---------+ +--> Team is still planning work in Rocky +--> Some good progress on planning Automated UI testing in CI +--> More: https://etherpad.openstack.org/p/tripleo-ui-cli-squad-status +---------------+ | Validations | +---------------+ +--> Team is preparing the custom validation spec: https://review.openstack.org/#/c/393775 +--> Work on port network environment validation to use parameters from heat/mistral instead of templates +--> More: https://etherpad.openstack.org/p/tripleo-validations-squad-status +---------------+ | Networking | +---------------+ +--> Configuring VFs in the overcloud for non-tenant networking use (at risk). +--> Foundation routed networks support still ongoing, close to merging final patches. +--> IPSEC between overcloud and undercloud is being merged. +--> More: https://etherpad.openstack.org/p/tripleo-networking-squad-status +--------------+ | Workflows | +--------------+ +--> No updates this week, team is planning PTG. +--> More: https://etherpad.openstack.org/p/tripleo-workflows-squad-status +------------+ | Owl fact | +------------+ Many owl species have asymmetrical ears that are different sizes and different heights on their heads. This gives the birds superior hearing and the ability to pinpoint where prey is located, even if they can't see it. Source: https://www.thespruce.com/fun-facts-about-owls-387096 Stay tuned! -- Your fellow reporter, Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kennelson11 at gmail.com Tue Feb 13 19:14:55 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 13 Feb 2018 19:14:55 +0000 Subject: [openstack-dev] [Elections][Kolla][QA][Mistral] Last Days for Voting in the PTL Elections Message-ID: Hello Kolla, Mistral & QA contributors, Just a quick reminder that elections are closing soon, if you haven't already you should use your right to vote and pick your favourite candidate! You have until Feb 14, 2018 23:45 UTC. Thanks for your time! -Kendall Nelson(diablo_rojo) -------------- next part -------------- An HTML attachment was scrubbed... URL: From Louie.Kwan at windriver.com Tue Feb 13 19:55:11 2018 From: Louie.Kwan at windriver.com (Kwan, Louie) Date: Tue, 13 Feb 2018 19:55:11 +0000 Subject: [openstack-dev] [automaton] How to extend automaton? Message-ID: <47EFB32CD8770A4D9590812EE28C977E9624DBF4@ALA-MBD.corp.ad.wrs.com> https://github.com/openstack/automaton Friendly state machines for python. A few questions about automaton. 1. I would like to know can we addition parameters on on_enter or on_exit callbacks. Right now, it seems it only allows state and triggered_event. a. I have many FSM running for different objects and it is much easier if I can pass on the some sort of ID back to the callbacks. 2. Can we or how can we store extra attribute like last state change timestamp? 3. Can we store additional identify info for the FSM object? Would like to add an UUID Thanks. Louie def print_on_enter(new_state, triggered_event): print("Entered '%s' due to '%s'" % (new_state, triggered_event)) def print_on_exit(old_state, triggered_event): print("Exiting '%s' due to '%s'" % (old_state, triggered_event)) # This will contain all the states and transitions that our machine will # allow, the format is relatively simple and designed to be easy to use. state_space = [ { 'name': 'stopped', 'next_states': { # On event 'play' transition to the 'playing' state. 'play': 'playing', 'open_close': 'opened', 'stop': 'stopped', }, 'on_enter': print_on_enter, 'on_exit': print_on_exit, }, -------------- next part -------------- An HTML attachment was scrubbed... URL: From tpb at dyncloud.net Tue Feb 13 19:57:45 2018 From: tpb at dyncloud.net (Tom Barron) Date: Tue, 13 Feb 2018 14:57:45 -0500 Subject: [openstack-dev] [tripleo][python3] python3 readiness? Message-ID: <20180213195745.pzucdooks24nbaqr@barron.net> Since python 2.7 will not be maintained past 2020 [1] it is a reasonable conjecture that downstream distributions will drop support for python 2 between now and then, perhaps as early as next year. In Pike, OpenStack projects, including TripleO, added python 3 unit tests. That effort was a good start, but likely we can agree that it is *only* a start to gaining confidence that real life TripleO deployments will "just work" running python 3. As agreed in the TripleO community meeting, this email is intended to kick off a discussion in advance of PTG on what else needs to be done. In this regard it is worth observing that TripleO currently only supports CentOS deployments and CentOS won't have python 3 support until RHEL does, which may be too late to test deploying with python3 before support for python2 is dropped. Fedora does have support for python 3 and for this reason RDO has decided [2] to begin work to run with *stabilized* Fedora repositories in the Rocky cycle, aiming to be ready on time to migrate to Python 3 and support its use in downstream and upstream CI pipelines. 
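Re the automaton questions above (passing extra parameters into on_enter/on_exit, keeping a last-change timestamp, attaching a UUID to the FSM): the callbacks listed in state_space are plain callables, so per-object context can be bound in with functools.partial (or a closure) rather than by extending automaton itself. A rough sketch along those lines, reusing the state_space/build() style from the quoted example -- the MachineContext class and its field names are illustrative, not part of automaton's API:

import functools
import time
import uuid

from automaton import machines


class MachineContext(object):
    """Per-FSM bookkeeping that automaton does not track for us."""

    def __init__(self, object_id):
        self.object_id = object_id            # identifies the owning object
        self.machine_uuid = uuid.uuid4()      # identity for this FSM instance
        self.last_change = None               # timestamp of the last transition


def on_enter(ctx, new_state, triggered_event):
    ctx.last_change = time.time()
    print("[%s %s] entered '%s' due to '%s'"
          % (ctx.object_id, ctx.machine_uuid, new_state, triggered_event))


def on_exit(ctx, old_state, triggered_event):
    print("[%s] exiting '%s' due to '%s'"
          % (ctx.object_id, old_state, triggered_event))


def build_machine(object_id):
    ctx = MachineContext(object_id)
    state_space = [
        {'name': 'stopped',
         'next_states': {'play': 'playing'},
         # partial() binds ctx up front, so automaton still sees the usual
         # (state, triggered_event) callback signature.
         'on_enter': functools.partial(on_enter, ctx),
         'on_exit': functools.partial(on_exit, ctx)},
        {'name': 'playing',
         'next_states': {'stop': 'stopped'},
         'on_enter': functools.partial(on_enter, ctx),
         'on_exit': functools.partial(on_exit, ctx)},
    ]
    machine = machines.FiniteMachine.build(state_space)
    machine.default_start_state = 'stopped'
    return machine, ctx


machine, ctx = build_machine('instance-0001')
machine.initialize()
machine.process_event('play')   # ctx.last_change now records when this happened

Since the context object lives outside the machine, the last state-change time and the UUID are simply attributes on it (or entries in a dict keyed by the UUID) and need no change to automaton; whether that is nicer than subclassing FiniteMachine is mostly a matter of taste.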
-- Tom Barron [1] https://pythonclock.org/ [2] https://lists.rdoproject.org/pipermail/dev/2018-February/008542.html -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From miguel at mlavalle.com Tue Feb 13 20:56:45 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Tue, 13 Feb 2018 14:56:45 -0600 Subject: [openstack-dev] [neutron] Rocky PTG team dinner Message-ID: Dear Neutrinos, We will have our traditional PTG team dinner on Thursday March 1st at 7pm. The place is to be defined. In the meantime, if you will attend the PTG, please put your name in the "Attendees" section at to the top of the etherpad (https://etherpad.openstack.org/p/neutron-ptg-rocky) with the following information: name - (irc nickname) - days in Dublin - Yes / No attending team dinner Looking forward to see you in Dublin! Cheers Miguel -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Tue Feb 13 22:06:54 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Tue, 13 Feb 2018 16:06:54 -0600 Subject: [openstack-dev] [keystone] priority review etherpad In-Reply-To: <3a19f88f-2bb5-1291-92d7-809a76521f76@gmail.com> References: <014cb24b-9f0d-e800-db87-979bc85c4741@gmail.com> <3a19f88f-2bb5-1291-92d7-809a76521f76@gmail.com> Message-ID: Following this up since we have a few bugs that need to be addressed for RC2, all of which are tagged and targeted appropriately in Launchpad for the stable/queens backport process [0]. All bugs have fixes in review to both master and stable/queens [1]. I'll keep an eye on them and respond to comments as soon as possible. Any reviews here would be greatly appreciated. As always, let me know if you notice something we need to address before we cut RC2 or if you have any questions about any of the patches proposed. Thanks, Lance [0] https://goo.gl/3HdWuX [1] https://goo.gl/RhxzoM On Sun, Jan 21, 2018 at 12:12 PM, Lance Bragstad wrote: > Actually - we've tracked this kind of work with etherpad in the past and > it becomes cumbersome and duplicates information after a while. Instead, I > built a dashboard to do this [0]. > > Please refer to that for the last and greatest things to review. > > [0] https://goo.gl/NWdAH7 > > > On 01/21/2018 08:19 AM, Lance Bragstad wrote: > > Happy NFL Divisional Playoff Day, > > We're getting down to the wire and I decided to make an etherpad to > track feature reviews that we need to land before feature freeze [0]. > I'll attempt to keep it updated the best I can. If you're looking for > things to review, it's a great place to start. If I missed something > that needs to be added to the list, please let me know. > > Thanks > > [0] https://etherpad.openstack.org/p/keystone-queens-release-sprint > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From armamig at gmail.com Tue Feb 13 22:08:05 2018 From: armamig at gmail.com (Armando M.) Date: Tue, 13 Feb 2018 22:08:05 +0000 Subject: [openstack-dev] [tripleo] [neutron] Current containerized neutron agents introduce a significant regression in the dataplane In-Reply-To: References: Message-ID: On 13 February 2018 at 14:02, Brent Eagles wrote: > Hi, > > The neutron agents are implemented in such a way that key functionality is > implemented in terms of haproxy, dnsmasq, keepalived and radvd > configuration. The agents manage instances of these services but, by > design, the parent is the top-most (pid 1). 
> > On baremetal this has the advantage that, while control plane changes > cannot be made while the agents are not available, the configuration at the > time the agents were stopped will work (for example, VMs that are restarted > can request their IPs, etc). In short, the dataplane is not affected by > shutting down the agents. > > In the TripleO containerized version of these agents, the supporting > processes (haproxy, dnsmasq, etc.) are run within the agent's container so > when the container is stopped, the supporting processes are also stopped. > That is, the behavior with the current containers is significantly > different than on baremetal and stopping/restarting containers effectively > breaks the dataplane. At the moment this is being considered a blocker and > unless we can find a resolution, we may need to recommend running the L3, > DHCP and metadata agents on baremetal. > > There's quite a bit to unpack here: are you suggesting that running these services in HA configuration doesn't help either with the data plane being gone after a stop/restart? Ultimately this boils down to where the state is persisted, and while certain agents rely on namespaces and processes whose ephemeral nature is hard to persist, enough could be done to allow for a non-disruptive bumping of the afore mentioned services. Thanks, Armando > Cheers, > > Brent Eagles > Daniel Alvarez > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Tue Feb 13 22:36:42 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Tue, 13 Feb 2018 14:36:42 -0800 Subject: [openstack-dev] [all][infra] PTG Infra Helproom Info and Signup Message-ID: <1518561402.929387.1269899112.764A7EF6@webmail.messagingengine.com> Hello everyone, Last PTG the infra helproom seemed to work out for projects that knew about it. The biggest problem seemed to be that other projects either just weren't aware that there is/was an Infra helproom or didn't know when an appropriate time to show up would be. We are going to try a couple things this time around to try and address those issues. First of all the Infra team is hosting a helproom at the Dublin PTG. Now you should all know :) The idea is that if projects or individuals have questions for the infra team or problems that we can help you with there is time set aside specifically for this. I'm not sure what room we will be in, you will have to look at the map, but we have the entirety of Monday and Tuesday set aside for this. To address the second issue of not knowing when a good time would be I have put together a sign up sheet at https://ethercalc.openstack.org/cvro305izog2 that projects or individuals can use to claim specific times that we can dedicate to them. Currently there are 12x 1hour long slots over the course of the two days. If we need more slots we can probably add an extra hour to each day at the end of the day. If there are conflicts I'm sure we will be able to share two different projects in the help room during one slot (but if you schedule one of these let us know so we are prepared for it). The ethercalc mentions this too but the schedule isn't supposed to be set in stone, we can be flexible. 
I just wanted to make sure there was enough structure this time around to make it easier for people to find the infra team and get the help they need. So now you get to go and sign up and we'll see you at the PTG. Thank you, Clark From assaf at redhat.com Tue Feb 13 22:48:32 2018 From: assaf at redhat.com (Assaf Muller) Date: Tue, 13 Feb 2018 17:48:32 -0500 Subject: [openstack-dev] [neutron] Generalized issues in the unit testing of ML2 mechanism drivers In-Reply-To: References: Message-ID: On Wed, Dec 13, 2017 at 7:30 AM, Michel Peterson wrote: > Through my work in networking-odl I've found what I believe is an issue > present in a majority of ML2 drivers. An issue I think needs awareness so > each project can decide a course of action. > > The issue stems from the adopted practice of importing > `neutron.tests.unit.plugins.ml2.test_plugin` and creating classes with noop > operation to "inherit" tests for free [1]. The idea behind is nice, you > inherit >600 tests that cover several scenarios. > > There are several issues of adopting this pattern, two of which are > paramount: > > 1. If the mechanism driver is not loaded correctly [2], the tests then don't > test the mechanism driver but still succeed and therefore there is no > indication that there is something wrong with the code. In the case of > networking-odl it wasn't discovered until last week, which means that for >1 > year it this was adding PASSed tests uselessly. > > 2. It gives a false sense of reassurance. If the code of those tests is > analyzed it's possible to see that the code itself is mostly centered around > testing the REST endpoint of neutron than actually testing that the > mechanism succeeds on the operation it was supposed to test. As a result of > this, there is marginally added value on having those tests. To be clear, > the hooks for the respective operations are called on the mechanism driver, > but the result of the operation is not asserted. > > I would love to hear more voices around this, so feel free to comment. > > Regarding networking-odl the solution I propose is the following: > **First**, discard completely the change mentioned in the footnote #2. > **Second**, create a patch that completely removes the tests that follow > this pattern. An interesting exercise would be to add 'raise ValueError' type exceptions in various ODL ML2 mech driver flows and seeing which tests fail. Basically, if a test passes without the ODL mech driver loaded, or with a faulty ODL mech driver, then you don't need to run the test for networking-odl changes. I'd be hesitant to remove all tests though, it's a good investment of time to figure out which tests are valuable to you. > **Third**, incorporate the neutron tempest plugin into the CI and rely on > that for assuring coverage of the different scenarios. > > Also to mention that when discovered this issue in networking-odl we took a > decision not to merge more patches until the PS of footnote #2 was > addressed. I think we can now decide to overrule that decision and proceed > as usual. 
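To make the fault-injection exercise above a bit more concrete, the cheap complement is a targeted test that patches the driver's postcommit hook and asserts it actually fired, instead of trusting the REST response alone. A rough sketch, assuming the Ml2PluginV2TestCase base (and its network() helper) from the test_plugin module discussed above; the driver alias and import path below are illustrative rather than the real networking-odl names:

import mock

from neutron.tests.unit.plugins.ml2 import test_plugin


class TestDriverIsActuallyExercised(test_plugin.Ml2PluginV2TestCase):

    # Alias the driver is registered under; illustrative value only.
    _mechanism_drivers = ['logger', 'opendaylight_v2']

    def test_create_network_reaches_driver(self):
        # Patch the postcommit hook and check that it fires.  If the
        # driver silently failed to load, this fails loudly instead of
        # the inherited tests passing without touching the driver at all.
        target = ('networking_odl.ml2.mech_driver_v2.'
                  'OpenDaylightMechanismDriver.create_network_postcommit')
        with mock.patch(target, autospec=True) as postcommit:
            with self.network():
                pass
        self.assertTrue(postcommit.called)

A handful of assertions like this per resource type would have flagged the loading problem described above long before a whole cycle went by.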
> > > > [1]: http://codesearch.openstack.org/?q=class%20.*\(.*TestMl2 > [2]: something that was happening in networking-odl and addressed by > https://review.openstack.org/#/c/523934 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From assaf at redhat.com Tue Feb 13 22:49:36 2018 From: assaf at redhat.com (Assaf Muller) Date: Tue, 13 Feb 2018 17:49:36 -0500 Subject: [openstack-dev] [neutron] [OVN] L3 traffic In-Reply-To: References: Message-ID: I'm not aware of plans for OVN to supported distributed SNAT, therefor a networking node will still be required for the foreseeable future. On Mon, Jan 15, 2018 at 2:18 AM, wenran xiao wrote: > Hey all, > I have found Network OVN will support to distributed floating ip > (https://docs.openstack.org/releasenotes/networking-ovn/unreleased.html), > how about the snat in the future? Still need network node or not? > Any suggestions are welcomed. > > > Best regards > Ran > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From openstack at nemebean.com Tue Feb 13 22:53:35 2018 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 13 Feb 2018 16:53:35 -0600 Subject: [openstack-dev] [tripleo][python3] python3 readiness? In-Reply-To: <20180213195745.pzucdooks24nbaqr@barron.net> References: <20180213195745.pzucdooks24nbaqr@barron.net> Message-ID: <06558ec3-3452-0e7d-3f6d-c0897ceff2a7@nemebean.com> On 02/13/2018 01:57 PM, Tom Barron wrote: > Since python 2.7 will not be maintained past 2020 [1] it is a reasonable > conjecture that downstream distributions > will drop support for python 2 between now and then, perhaps as early as > next year. I'm not sure I agree. I suspect python 2 support will not go quietly into that good night. Personally I anticipate a lot of kicking and screaming right up to the end, especially from change averse enterprise users. But that's neither here nor there. I think we're all in agreement that python 3 support is needed. :-) > In Pike, OpenStack projects, including TripleO, added python 3 unit > tests.  That effort was a good start, but likely we can agree that it is > *only* a start to gaining confidence that real life TripleO deployments > will "just work" running python 3.  As agreed in the TripleO community > meeting, this email is intended to kick off a discussion in advance of > PTG on what else needs to be done. > > In this regard it is worth observing that TripleO currently only > supports CentOS deployments and CentOS won't have python 3 support until > RHEL does, which may be too late to test deploying with python3 before > support for python2 is dropped.  Fedora does have support for python 3 > and for this reason RDO has decided [2] to begin work to run with > *stabilized* Fedora repositories in the Rocky cycle, aiming to be ready > on time to migrate to Python 3 and support its use in downstream and > upstream CI pipelines. So that means we'll never have Python 3 on CentOS 7 and we need to start supporting Fedora again in order to do functional testing on py3? That's potentially messy. 
My recollection of running TripleO CI on Fedora is that it was, to put it nicely, a maintenance headache. Even with the "stabilized" repos from RDO, TripleO has a knack for hitting edge case bugs in a fast-moving distro like Fedora. I guess it's not entirely clear to me what the exact plan is since there's some discussion of frozen snapshots and such, which might address the fast-moving part. It also means more CI jobs, unless we're okay with dropping CentOS support for some scenarios and switching them to Fedora. Given the amount of changes between CentOS 7 and current Fedora that's a pretty big gap in our testing. I guess if RDO has chosen this path then we don't have much choice. As far as next steps, the first thing that would need to be done is to get TripleO running on Fedora again. I suggest starting with https://github.com/openstack/instack-undercloud/blob/3e702f3bdfea21c69dc8184e690f26e142a13bff/instack_undercloud/undercloud.py#L1377 :-) -Ben From melwittt at gmail.com Tue Feb 13 23:20:34 2018 From: melwittt at gmail.com (melanie witt) Date: Tue, 13 Feb 2018 15:20:34 -0800 Subject: [openstack-dev] [nova][cyborg]Dublin PTG Cyborg Nova Interaction Discussion In-Reply-To: References: <8dc83751-af2d-5f55-fefc-8a570be9680c@fried.cc> <4ad0d302-2839-ad6e-159f-3c509aaaa7f0@gmail.com> <99F0F3E9-58B3-41B5-924E-F66002AFC1D1@leafe.com> Message-ID: > On Feb 12, 2018, at 16:06, Zhipeng Huang wrote: > > Let's settle on Tuesday afternoon session then, thanks a lot :) Do we have a proposed time and place for the session already? I checked the cyborg etherpad [1] and it looks like we’re thinking 2:00pm on Tuesday. Do we need to reserve a room for the discussion or do you already have a room we can join? Thanks, -melanie [1] https://etherpad.openstack.org/p/cyborg-ptg-rocky From dmsimard at redhat.com Tue Feb 13 23:30:19 2018 From: dmsimard at redhat.com (David Moreau Simard) Date: Tue, 13 Feb 2018 18:30:19 -0500 Subject: [openstack-dev] [tripleo][python3] python3 readiness? In-Reply-To: <06558ec3-3452-0e7d-3f6d-c0897ceff2a7@nemebean.com> References: <20180213195745.pzucdooks24nbaqr@barron.net> <06558ec3-3452-0e7d-3f6d-c0897ceff2a7@nemebean.com> Message-ID: On Tue, Feb 13, 2018 at 5:53 PM, Ben Nemec wrote: > > I guess if RDO has chosen this path then we don't have much choice. This makes it sound like we had a choice to begin with. We've already had a lot of discussions around the topic but we're ultimately stuck between a rock and a hard place. We're in this together and it's important that everyone understands what's going on. It's not a secret to anyone that Fedora is more or less the upstream to RHEL. There's no py3 available in RHEL 7. The alternative to making things work in Fedora is to use Software Collections [1]. If you're not familiar with Software Collections for python, it's more or less the installation of RPM packages in a virtualenv. Installing the "rh-python35" SCL would: - Set up a chroot in /opt/rh/rh-python35/root - Set up a py35 interpreter at /opt/rh/rh-python35/root/usr/bin/python3 And then, when you would install packages *against* that SCL, they would end up being installed in /opt/rh/rh-python35/root/usr/lib/python3.5/site-packages/. That means that you need *all* of your python packages to be built against the software collections and installed in the right path. Python script with a #!/usr/bin/python shebang ? Probably not going to work. Need python-requests ? Nope, sclo-python35-python-requests. Need one of the 1000+ python packages maintained by RDO ? 
Those need to be re-built and maintained against the SCL too. If you want to see what it looks like in practice, here's a Zuul spec file [2] or the official docs for SCL [3]. Making stuff work on Fedora is not going to be easy for anyone but it sure beats messing with 1500+ packages that we'd need to untangle later. Most of the hard work for Fedora is already done as far as packaging is concerned, we never really stopped building packages for Fedora [4]. It means we should be prepared once RHEL 8 comes out. [1]: https://www.softwarecollections.org/en/ [2]: https://softwarefactory-project.io/r/gitweb?p=scl/zuul-distgit.git;a=blob;f=zuul.spec;h=6bba6a79c1f8ff844a9ea3715ab2cef1b12d323f;hb=refs/heads/master [3]: https://www.softwarecollections.org/en/docs/guide/#chap-Packaging_Software_Collections [4]: https://trunk.rdoproject.org/fedora-rawhide/report.html David Moreau Simard Senior Software Engineer | OpenStack RDO dmsimard = [irc, github, twitter] From soulxu at gmail.com Tue Feb 13 23:57:22 2018 From: soulxu at gmail.com (Alex Xu) Date: Wed, 14 Feb 2018 07:57:22 +0800 Subject: [openstack-dev] [nova][cyborg]Dublin PTG Cyborg Nova Interaction Discussion In-Reply-To: <8dc83751-af2d-5f55-fefc-8a570be9680c@fried.cc> References: <8dc83751-af2d-5f55-fefc-8a570be9680c@fried.cc> Message-ID: +1, I'm interested also. 2018-02-12 23:27 GMT+08:00 Eric Fried : > I'm interested. No date/time preference so far as long as it sticks to > Monday/Tuesday. > > efried > > On 02/12/2018 09:13 AM, Zhipeng Huang wrote: > > Hi Nova team, > > > > Cyborg will have ptg sessions on Mon and Tue from 2:00pm to 6:00pm, and > > we would love to invite any of you guys who is interested in nova-cyborg > > interaction to join the discussion. The discussion will mainly focus on: > > > > (1) Cyborg team recap on the resource provider features that are > > implemented in Queens. > > (2) Joint discussion on what will be the impact on Nova side and future > > collaboration areas. > > > > The session is planned for 40 mins long. > > > > If you are interested plz feedback which date best suit for your > > arrangement so that we could arrange the topic accordingly :) > > > > Thank you very much. > > > > > > > > -- > > Zhipeng (Howard) Huang > > > > Standard Engineer > > IT Standard & Patent/IT Product Line > > Huawei Technologies Co,. Ltd > > Email: huangzhipeng at huawei.com > > Office: Huawei Industrial Base, Longgang, Shenzhen > > > > (Previous) > > Research Assistant > > Mobile Ad-Hoc Network Lab, Calit2 > > University of California, Irvine > > Email: zhipengh at uci.edu > > Office: Calit2 Building Room 2402 > > > > OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado > > > > > > ____________________________________________________________ > ______________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ekcs.openstack at gmail.com Wed Feb 14 00:16:16 2018 From: ekcs.openstack at gmail.com (Eric K) Date: Tue, 13 Feb 2018 16:16:16 -0800 Subject: [openstack-dev] [ptg][congress] Congress Rocky brainstorming & planning Message-ID: Hi all, In lieu of planning sessions at the PTG, let's have asynchronous brainstorming and a sync-up telecon. (I will still be available at the PTG for discussions). If you're interested, please: 1. Jot down your thoughts and ideas (problems, features, use cases, etc.) in this planning/brainstorming etherpad: https://etherpad.openstack.org/p/congress-rocky-brainstorm 2. Indicate your likely availability in this calendar (targeting the week after PTG): http://whenisgood.net/2yxtikn Thanks so much! Eric Kao From gmann at ghanshyammann.com Wed Feb 14 00:28:06 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 14 Feb 2018 09:28:06 +0900 Subject: [openstack-dev] [infra] [all] project pipeline definition should stay in project-config or project side ? In-Reply-To: <20180213150610.GA26600@localhost.localdomain> References: <20180213150610.GA26600@localhost.localdomain> Message-ID: On Wed, Feb 14, 2018 at 12:06 AM, Paul Belanger wrote: > On Tue, Feb 13, 2018 at 11:05:34PM +0900, gmann wrote: >> Hi Infra Team, >> >> I have 1 quick question on zuulv3 jobs and their migration part. From >> zuulv3 doc [1], it is clear about migrating the job definition and use >> those among cross repo pipeline etc. >> >> But I did not find clear recommendation that whether project's >> pipeline definition should stay in project-config or we should move >> that to project side. >> >> IMO, >> 'template' part(which has system level jobs) can stay in >> project-config. For example below part- >> >> https://github.com/openstack-infra/project-config/blob/e2b82623a4ab60261b37a91e311118301927b9b6/zuul.d/projects.yaml#L10507-L10523 >> >> Other pipeline definition- 'check', 'gate', 'experimental' etc should >> be move to project repo, mainly this list- >> https://github.com/openstack-infra/project-config/blob/master/zuul.d/projects.yaml#L10524-L11019 >> >> If we move those past as mentioned above then, we can have a >> consolidated place to control the project pipeline for >> 'irrelevant-files', specific branch etc >> >> ..1 https://docs.openstack.org/infra/manual/zuulv3.html >> > As it works today, pipeline stanza needs to be in a config project[1] > (project-config) repo. So what you are suggestion will not work. This was done > to allow zuul admins to control which pipelines are setup / configured. > > I am mostly curious why a project would need to modify a pipeline configuration > or duplicate it into all projects, over having it central located in > project-config. pipeline stanza and configuration stay in project-config. I mean list of jobs defined in each pipeline for specific project for example here[2]. Now we have list of jobs for each pipeline in 2 places, one in project-config [2] and second in project repo[3]. Issue in having it in 2 places: - No single place to check what all jobs project will run with what conditions - If we need to modify the list of jobs in pipeline or change other bits like irrelevant-files etc then it has to be done in project-config. So no full control by project side. 
> > [1] https://docs.openstack.org/infra/zuul/user/config.html#pipeline ..2 https://github.com/openstack-infra/project-config/blob/ba2b7fb5dfee02ff11dde877c973b40815ab7838/zuul.d/projects.yaml#L10524-L11019 ..3 https://github.com/openstack/nova/blob/87036b4b27945b6ae34b57e6ee15dd76eb7f726a/.zuul.yaml#L104-L119 >> >> -gmann >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From gmann at ghanshyammann.com Wed Feb 14 00:32:59 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 14 Feb 2018 09:32:59 +0900 Subject: [openstack-dev] [infra] [all] project pipeline definition should stay in project-config or project side ? In-Reply-To: References: <20180213150610.GA26600@localhost.localdomain> Message-ID: On Wed, Feb 14, 2018 at 12:34 AM, Andrea Frittoli wrote: > > > On Tue, Feb 13, 2018 at 3:06 PM Paul Belanger wrote: >> >> On Tue, Feb 13, 2018 at 11:05:34PM +0900, gmann wrote: >> > Hi Infra Team, >> > >> > I have 1 quick question on zuulv3 jobs and their migration part. From >> > zuulv3 doc [1], it is clear about migrating the job definition and use >> > those among cross repo pipeline etc. >> > >> > But I did not find clear recommendation that whether project's >> > pipeline definition should stay in project-config or we should move >> > that to project side. >> > >> > IMO, >> > 'template' part(which has system level jobs) can stay in >> > project-config. For example below part- > > > I think there are pros and cons in both cases, but I lean more towards > having everything > in tree. > > If everything moves into the project then the configuration of what runs for > a project is more > or less in one place, so it's a bit more readable and projects are in > control. > > On the other side adding a template maintained by infra/qa to a number of > projects transforms > into a potentially very large set of changes. But I don't think adding a new > template happens > so often, and it would still be possible for infra/qa to define usage of > that template in project-config > and then for projects to move that in tree over time. Yes, i agree on that. Currently I thought of keeping them in project-config as they are more system level mandatory things which should run on each project. But yes, we can move those to project repo over time. 
> > Andrea Frittoli (andreaf) > >> > >> > >> > https://github.com/openstack-infra/project-config/blob/e2b82623a4ab60261b37a91e311118301927b9b6/zuul.d/projects.yaml#L10507-L10523 >> > >> > Other pipeline definition- 'check', 'gate', 'experimental' etc should >> > be move to project repo, mainly this list- >> > >> > https://github.com/openstack-infra/project-config/blob/master/zuul.d/projects.yaml#L10524-L11019 >> > >> > If we move those past as mentioned above then, we can have a >> > consolidated place to control the project pipeline for >> > 'irrelevant-files', specific branch etc >> > >> > ..1 https://docs.openstack.org/infra/manual/zuulv3.html >> > >> As it works today, pipeline stanza needs to be in a config project[1] >> (project-config) repo. So what you are suggestion will not work. This was >> done >> to allow zuul admins to control which pipelines are setup / configured. > > > I think gmann referred to the list of jobs defined in each pipeline by a > project > as opposed to the definition of the pipeline itself. Yes, i mean "list of jobs in each pipeline per project" not "pipeline definition or configuration". I think i mixed the term for both :). Thanks. > >> >> >> I am mostly curious why a project would need to modify a pipeline >> configuration >> or duplicate it into all projects, over having it central located in >> project-config. >> >> [1] https://docs.openstack.org/infra/zuul/user/config.html#pipeline >> > >> > -gmann >> > >> > >> > __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: >> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -gmann From hongbin034 at gmail.com Wed Feb 14 01:30:44 2018 From: hongbin034 at gmail.com (Hongbin Lu) Date: Tue, 13 Feb 2018 20:30:44 -0500 Subject: [openstack-dev] [Zun] Meeting cancel Message-ID: Hi team, We won't have team meetings in the next two weeks. This is because next week is Lunar New Year and the next next week is the PTG. We will resume the weekly team meeting at Mar 6, 2018. Please find the schedule in: https://wiki.openstack.org/wiki/Zun#Meetings . Happy holiday everyone! Best regards, Hongbin -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Wed Feb 14 02:50:20 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 14 Feb 2018 11:50:20 +0900 Subject: [openstack-dev] [nova][neutron][infra] zuul job definitions overrides and the irrelevant-file attribute In-Reply-To: <874ln8v6ye.fsf@meyer.lemoncheese.net> References: <1516975504.9811.5@smtp.office365.com> <874ln8v6ye.fsf@meyer.lemoncheese.net> Message-ID: On Sat, Jan 27, 2018 at 2:57 AM, James E. 
Blair wrote: > Balázs Gibizer writes: > >> Hi, >> >> I'm getting more and more confused how the zuul job hierarchy works or >> is supposed to work. > > Hi! > > First, you (or others) may or may not have seen this already -- some of > it didn't exist when we first rolled out v3, and some of it has changed > -- but here are the relevant bits of the documentation that should help > explain what's going on. It helps to understand freezing: > > https://docs.openstack.org/infra/zuul/user/config.html#job > > and matching: > > https://docs.openstack.org/infra/zuul/user/config.html#matchers > >> First there was a bug in nova that some functional tests are not >> triggered although the job (re-)definition in the nova part of the >> project-config should not prevent it to run [1]. >> >> There we figured out that irrelevant-files parameter of the jobs are >> not something that can be overriden during re-definition or through >> parent-child relationship. The base job openstack-tox-functional has >> an irrelevant-files attribute that lists '^doc/.*$' as a path to be >> ignored [2]. In the other hand the nova part of the project-config >> tries to make this ignore less broad by adding only '^doc/source/.*$' >> . This does not work as we expected and the job did not run on changes >> that only affected ./doc/notification_samples path. We are fixing it >> by defining our own functional job in nova tree [4]. >> >> [1] https://bugs.launchpad.net/nova/+bug/1742962 >> [2] >> https://github.com/openstack-infra/openstack-zuul-jobs/blob/1823e3ea20e6dfaf37786a6ff79c56cb786bf12c/zuul.d/jobs.yaml#L380 >> [3] >> https://github.com/openstack-infra/project-config/blob/1145ab1293f5fa4d34c026856403c22b091e673c/zuul.d/projects.yaml#L10509 >> [4] https://review.openstack.org/#/c/533210/ > > This is correct. The issue here is that the irrelevant-files definition > on openstack-tox-functional is too broad. We need to be *extremely* > careful applying matchers to jobs like that. Generally I think that > irrelevant-files should be reserved for the project-pipeline invocations > only. That's how they were effectively used in Zuul v2, after all. > > Essentially, when someone puts an irrelevant-files section on a job like > that, they are saying "this job will never apply to these files, ever." > That's clearly not correct in this case. > > So our solutions are to acknowledge that it's over-broad, and reduce or > eliminate the list in [2] and expand it elsewhere (as in [3]). Or we > can say "we were generally correct, but nova is extra special so it > needs its own job". If that's the choice, then I think [4] is a fine > solution. > >> Then I started looking into other jobs to see if we made similar >> mistakes. I found two other examples in the nova related jobs where >> redefining the irrelevant-files of a job caused problems. In these >> examples nova tried to ignore more paths during the override than what >> was originally ignored in the job definition but that did not work >> [5][6]. >> >> [5] https://bugs.launchpad.net/nova/+bug/1745405 (temptest-full) > > As noted in that bug, the tempest-full job is invoked on nova via this > stanza: > > https://github.com/openstack-infra/project-config/blob/5ddbd62a46e17dd2fdee07bec32aa65e3b637ff3/zuul.d/projects.yaml#L10674-L10688 > > As expected, that did not match. 
There is a second invocation of > tempest-full on nova here: > > http://git.openstack.org/cgit/openstack-infra/openstack-zuul-jobs/tree/zuul.d/zuul-legacy-project-templates.yaml#n126 > > That has no irrelevant-files matches, and so matches everything. If you > drop the use of that template, it will work as expected. Or, if you can > say with some certainty that nova's irrelevant-files set is not > over-broad, you could move the irrelevant-files from nova's invocation > into the template, or even the job, and drop nova's individual > invocation. > >> [6] https://bugs.launchpad.net/nova/+bug/1745431 (neutron-grenade) > > The same template invokes this job as well. > >> So far the problem seemed to be consistent (i.e. override does not >> work). But then I looked into neutron-grenade-multinode. That job is >> defined in neutron tree (like neutron-grenade) but nova also refers to >> it in nova section of the project-config with different >> irrelevant-files than their original definition. So I assumed that >> this will lead to similar problem than in case of neutron-grenade, but >> it doesn't. >> >> The neutron-grenade-multinode original definition [1] does not try to >> ignore the 'nova/tests' path but the nova side of the definition in >> the project config does try to ignore that path [8]. Interestingly a >> patch in nova that only changes under the path: nova/tests/ does not >> trigger the job [9]. So in this case overriding the irrelevant-files >> of a job works. (It seems that overriding neutron-tempest-linuxbridge >> irrelevant-files works too). >> >> [7] >> https://github.com/openstack/neutron/blob/7e3d6a18fb928bcd303a44c1736d0d6ca9c7f0ab/.zuul.yaml#L140-L159 >> [8] >> https://github.com/openstack-infra/project-config/blob/5ddbd62a46e17dd2fdee07bec32aa65e3b637ff3/zuul.d/projects.yaml#L10516-L10530 >> [9] https://review.openstack.org/#/c/537936/ >> >> I don't see what is the difference between neutron-grenade and >> neutron-grenade-multinode jobs definitions from this perspective but >> it seems that the irrelevent-files attribute behaves inconsistently >> in these two jobs. Could you please help me undestand how >> irrelevant-files in overriden jobs supposed to work? > > These jobs only have the one invocation -- on the nova project -- and > are not added via a template. > > Hopefully that explains the difference. > > Basically, the irrelevant-files on at least one project-pipeline > invocation of a job have to match, as well as at least one definition of > the job. So if both things have irrelevant-files, then it's effectively > a union of the two. Thanks a lot for clarifying those bits. I thought irrelevant-files are always overridden not append/union as they are regular expression or list of regular expressions. It is clear now that where irrelevant-files are present i both job definition and project-pipeline invocation job list then it is union of these two. Is it same for inherited job case also or it is completely overridden? I mean if base job has irrelevant-files and then inherited job from this base job define irrelevant-files then, is it union or overridden. One more question, as mentioned in doc that 'irrelevant-files' and 'files' should not be defined together in single job. what happen if it is defined? as base job usually have 'irrelevant-files' and if any inherited job defined 'files' then, is it error or unexpected behaviour ? Because answer to above query convey whether we should declare the 'irrelevant-files', 'files' in job definition or not ? 
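For what it's worth, the "union of the two" behaviour described above is easy to model in a few lines of Python -- this is only a model of the matching rule to make the semantics concrete, not Zuul's actual implementation:

import re


def _filtered_out(changed_files, irrelevant_patterns):
    # An irrelevant-files matcher filters a job out when *every* changed
    # file matches at least one of its patterns.
    return all(any(re.match(p, f) for p in irrelevant_patterns)
               for f in changed_files)


def job_runs(changed_files, job_irrelevant, invocation_irrelevant):
    # The job must survive the matcher on the job definition AND the one
    # on the project-pipeline invocation, so the two irrelevant-files
    # lists effectively combine as a union of ignored paths.  (With
    # several invocations or variants, surviving any one on each side is
    # enough; a single one is assumed here for brevity.)
    return (not _filtered_out(changed_files, job_irrelevant) and
            not _filtered_out(changed_files, invocation_irrelevant))


# Mirrors bug 1742962: a change touching only doc/notification_samples/
# is still filtered by the job-level '^doc/.*$', even though the nova
# invocation only tried to ignore '^doc/source/.*$'.
print(job_runs(['doc/notification_samples/foo.json'],
               job_irrelevant=['^doc/.*$'],
               invocation_irrelevant=['^doc/source/.*$']))  # -> False

Under that model the only way to narrow an over-broad job-level irrelevant-files list is to change (or stop inheriting) the job definition itself, which is effectively what defining the in-tree nova functional job does.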
> > I used a tool to help verify some of the information in this message, > especially the bugs [5] and [6]. You can ask Zuul to output debug > information about its job selection if you're dealing with confusing > situations like this. I went ahead and pushed a new patchset to your > test change to demonstrate how: > > https://review.openstack.org/537936 > > When it finishes running all the tests (in a few hours), it should > include in its report debug information about the decision-making > process for the jobs it ran. It outputs similar information into the > debug logs; so that we don't have to wait for it to see what it looks > like here is that copy: > > http://paste.openstack.org/show/653729/ > > The relevant lines for [5] are: > > 2018-01-26 13:07:53,560 DEBUG zuul.layout: Pipeline variant matched > 2018-01-26 13:07:53,560 DEBUG zuul.layout: Pipeline variant did not match > > Note the project-file-branch-line-number references are especially > helpful. > > -Jim > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -gmann From zhipengh512 at gmail.com Wed Feb 14 03:14:04 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Wed, 14 Feb 2018 11:14:04 +0800 Subject: [openstack-dev] [nova][cyborg]Dublin PTG Cyborg Nova Interaction Discussion In-Reply-To: References: <8dc83751-af2d-5f55-fefc-8a570be9680c@fried.cc> <4ad0d302-2839-ad6e-159f-3c509aaaa7f0@gmail.com> <99F0F3E9-58B3-41B5-924E-F66002AFC1D1@leafe.com> Message-ID: @melanie yes Cyborg has its meeting room I think :) Room details should be fixed this week or early next week I suppose. So it will be 2:00pm on Tuesday, first thing of buisness after lunch :) On Wed, Feb 14, 2018 at 7:20 AM, melanie witt wrote: > > On Feb 12, 2018, at 16:06, Zhipeng Huang wrote: > > > > Let's settle on Tuesday afternoon session then, thanks a lot :) > > Do we have a proposed time and place for the session already? I checked > the cyborg etherpad [1] and it looks like we’re thinking 2:00pm on Tuesday. > Do we need to reserve a room for the discussion or do you already have a > room we can join? > > Thanks, > -melanie > > [1] https://etherpad.openstack.org/p/cyborg-ptg-rocky > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From hguemar at fedoraproject.org Wed Feb 14 04:24:20 2018 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Wed, 14 Feb 2018 05:24:20 +0100 Subject: [openstack-dev] [tripleo][python3] python3 readiness? 
In-Reply-To: <06558ec3-3452-0e7d-3f6d-c0897ceff2a7@nemebean.com> References: <20180213195745.pzucdooks24nbaqr@barron.net> <06558ec3-3452-0e7d-3f6d-c0897ceff2a7@nemebean.com> Message-ID: 2018-02-13 23:53 GMT+01:00 Ben Nemec : > > > On 02/13/2018 01:57 PM, Tom Barron wrote: >> >> Since python 2.7 will not be maintained past 2020 [1] it is a reasonable >> conjecture that downstream distributions >> will drop support for python 2 between now and then, perhaps as early as >> next year. > > > I'm not sure I agree. I suspect python 2 support will not go quietly into > that good night. Personally I anticipate a lot of kicking and screaming > right up to the end, especially from change averse enterprise users. > > But that's neither here nor there. I think we're all in agreement that > python 3 support is needed. :-) > >> In Pike, OpenStack projects, including TripleO, added python 3 unit tests. >> That effort was a good start, but likely we can agree that it is *only* a >> start to gaining confidence that real life TripleO deployments will "just >> work" running python 3. As agreed in the TripleO community meeting, this >> email is intended to kick off a discussion in advance of PTG on what else >> needs to be done. >> >> In this regard it is worth observing that TripleO currently only supports >> CentOS deployments and CentOS won't have python 3 support until RHEL does, >> which may be too late to test deploying with python3 before support for >> python2 is dropped. Fedora does have support for python 3 and for this >> reason RDO has decided [2] to begin work to run with *stabilized* Fedora >> repositories in the Rocky cycle, aiming to be ready on time to migrate to >> Python 3 and support its use in downstream and upstream CI pipelines. > > > So that means we'll never have Python 3 on CentOS 7 and we need to start > supporting Fedora again in order to do functional testing on py3? That's > potentially messy. My recollection of running TripleO CI on Fedora is that > it was, to put it nicely, a maintenance headache. Even with the > "stabilized" repos from RDO, TripleO has a knack for hitting edge case bugs > in a fast-moving distro like Fedora. I guess it's not entirely clear to me > what the exact plan is since there's some discussion of frozen snapshots and > such, which might address the fast-moving part. > > It also means more CI jobs, unless we're okay with dropping CentOS support > for some scenarios and switching them to Fedora. Given the amount of > changes between CentOS 7 and current Fedora that's a pretty big gap in our > testing. > > I guess if RDO has chosen this path then we don't have much choice. As far > as next steps, the first thing that would need to be done is to get TripleO > running on Fedora again. I suggest starting with > https://github.com/openstack/instack-undercloud/blob/3e702f3bdfea21c69dc8184e690f26e142a13bff/instack_undercloud/undercloud.py#L1377 > :-) > > -Ben > RDO has *yet* to choose a plan, and people were invited to work on the "stabilized" repository draft [0]. If anyone has a better plan that fits all the constraints, please share it asap. Whatever the plan, we're launching it with the Rocky cycle. 
Among the constraints (but not limited to): * EL8 is not available * No Python3 on EL7 *and* no allocated resources to maintain it (that includes rebuilding/maintaining *all* python modules + libraries) * Bridge the gap between EL7 and EL8, Fedora 27/28 are the closest thing we have to EL8 [1][2] * SCL have a cost (and I cannot yet expose why but not jumping onto the SCL bandwagon has proven to be the right bet) * Have something stable enough so that upstream gate can use it. That's why plan stress that updates will be gated (definition of how is still open) * Manage to align planets so that we can ship version X of OpenStack [3] on EL8 without additional delay Well, I cannot say that I can't relate to what you're saying, though. [4] Regards, H. [0] https://etherpad.openstack.org/p/stabilized-fedora-repositories-for-openstack [1] Do not assume anything on EL8 (name included) it's more complicated than that. [2] Take a breath, but we might have to ship RDO as modules, not just RPMs or Containers. I already have headaches about it. [3] Do not ask which one, I do not know :) [4] Good thing that next PTG will be in Dublin, I'll need a lot of irish whiskey :) > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From nusiddiq at redhat.com Wed Feb 14 04:24:42 2018 From: nusiddiq at redhat.com (Numan Siddique) Date: Wed, 14 Feb 2018 09:54:42 +0530 Subject: [openstack-dev] [neutron] [OVN] L3 traffic In-Reply-To: References: Message-ID: On Wed, Feb 14, 2018 at 4:19 AM, Assaf Muller wrote: > I'm not aware of plans for OVN to supported distributed SNAT, therefor > a networking node will still be required for the foreseeable future. > > On Mon, Jan 15, 2018 at 2:18 AM, wenran xiao wrote: > > Hey all, > > I have found Network OVN will support to distributed floating ip > > (https://docs.openstack.org/releasenotes/networking-ovn/unreleased.html > ), > > how about the snat in the future? Still need network node or not? > > Any suggestions are welcomed. > OVN can select any node (or nodes if HA is enabled) to schedule a router as long as the node has ovn-controller service running in it and ovn-bridge-mappings configured properly. So, If you have external connectivity in your compute nodes and you are fine with any of these compute nodes doing the centralized snat, you don't need to have a network node. Thanks Numan > > > > > Best regards > > Ran > > > > ____________________________________________________________ > ______________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From haleyb.dev at gmail.com Wed Feb 14 04:40:05 2018 From: haleyb.dev at gmail.com (Brian Haley) Date: Tue, 13 Feb 2018 23:40:05 -0500 Subject: [openstack-dev] [tripleo] [neutron] Current containerized neutron agents introduce a significant regression in the dataplane In-Reply-To: References: Message-ID: On 02/13/2018 05:08 PM, Armando M. wrote: > > > On 13 February 2018 at 14:02, Brent Eagles > wrote: > > Hi, > > The neutron agents are implemented in such a way that key > functionality is implemented in terms of haproxy, dnsmasq, > keepalived and radvd configuration. The agents manage instances of > these services but, by design, the parent is the top-most (pid 1). > > On baremetal this has the advantage that, while control plane > changes cannot be made while the agents are not available, the > configuration at the time the agents were stopped will work (for > example, VMs that are restarted can request their IPs, etc). In > short, the dataplane is not affected by shutting down the agents. > > In the TripleO containerized version of these agents, the supporting > processes (haproxy, dnsmasq, etc.) are run within the agent's > container so when the container is stopped, the supporting processes > are also stopped. That is, the behavior with the current containers > is significantly different than on baremetal and stopping/restarting > containers effectively breaks the dataplane. At the moment this is > being considered a blocker and unless we can find a resolution, we > may need to recommend running the L3, DHCP and metadata agents on > baremetal. I didn't think the neutron metadata agent was affected but just the ovn-metadata agent? Or is there a problem with the UNIX domain sockets the haproxy instances use to connect to it when the container is restarted? > There's quite a bit to unpack here: are you suggesting that running > these services in HA configuration doesn't help either with the data > plane being gone after a stop/restart? Ultimately this boils down to > where the state is persisted, and while certain agents rely on > namespaces and processes whose ephemeral nature is hard to persist, > enough could be done to allow for a non-disruptive bumping of the afore > mentioned services. Armando - https://review.openstack.org/#/c/542858/ (if accepted) should help with dataplane downtime, as sharing the namespaces lets them persist, which eases what the agent has to configure on the restart of a container (think of what the l3-agent needs to create for 1000 routers). But it doesn't address dnsmasq being unavailable when the dhcp-agent container is restarted like it is today. Maybe one way around that is to run 2+ agents per network, but that still leaves a regression from how it works today. Even with l3-ha I'm not sure things are perfect, might wind-up with two masters sometimes. I've seen one suggestion of putting all these processes in their own container instead of the agent container so they continue to run, it just might be invasive to the neutron code. Maybe there is another option? -Brian From aj at suse.com Wed Feb 14 05:45:50 2018 From: aj at suse.com (Andreas Jaeger) Date: Wed, 14 Feb 2018 06:45:50 +0100 Subject: [openstack-dev] [infra] [all] project pipeline definition should stay in project-config or project side ? 
In-Reply-To: References: <20180213150610.GA26600@localhost.localdomain> Message-ID: On 2018-02-14 01:28, Ghanshyam Mann wrote: > On Wed, Feb 14, 2018 at 12:06 AM, Paul Belanger wrote: >> On Tue, Feb 13, 2018 at 11:05:34PM +0900, gmann wrote: >>> Hi Infra Team, >>> >>> I have 1 quick question on zuulv3 jobs and their migration part. From >>> zuulv3 doc [1], it is clear about migrating the job definition and use >>> those among cross repo pipeline etc. >>> >>> But I did not find clear recommendation that whether project's >>> pipeline definition should stay in project-config or we should move >>> that to project side. >>> >>> IMO, >>> 'template' part(which has system level jobs) can stay in >>> project-config. For example below part- >>> >>> https://github.com/openstack-infra/project-config/blob/e2b82623a4ab60261b37a91e311118301927b9b6/zuul.d/projects.yaml#L10507-L10523 >>> >>> Other pipeline definition- 'check', 'gate', 'experimental' etc should >>> be move to project repo, mainly this list- >>> https://github.com/openstack-infra/project-config/blob/master/zuul.d/projects.yaml#L10524-L11019 >>> >>> If we move those past as mentioned above then, we can have a >>> consolidated place to control the project pipeline for >>> 'irrelevant-files', specific branch etc >>> >>> ..1 https://docs.openstack.org/infra/manual/zuulv3.html >>> >> As it works today, pipeline stanza needs to be in a config project[1] >> (project-config) repo. So what you are suggestion will not work. This was done >> to allow zuul admins to control which pipelines are setup / configured. >> >> I am mostly curious why a project would need to modify a pipeline configuration >> or duplicate it into all projects, over having it central located in >> project-config. > > pipeline stanza and configuration stay in project-config. I mean list > of jobs defined in each pipeline for specific project for example > here[2]. Now we have list of jobs for each pipeline in 2 places, one > in project-config [2] and second in project repo[3]. > > Issue in having it in 2 places: > - No single place to check what all jobs project will run with what conditions > - If we need to modify the list of jobs in pipeline or change other > bits like irrelevant-files etc then it has to be done in > project-config. So no full control by project side. This should be explained in: https://docs.openstack.org/infra/manual/zuulv3.html#what-to-convert So, the standard templates/jobs - incl. PTI mandated ones - should stay in project-config, you can move everything else in-tree, Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From harlowja at fastmail.com Wed Feb 14 06:05:56 2018 From: harlowja at fastmail.com (Joshua Harlow) Date: Tue, 13 Feb 2018 22:05:56 -0800 Subject: [openstack-dev] [automaton] How to extend automaton? In-Reply-To: <47EFB32CD8770A4D9590812EE28C977E9624DBF4@ALA-MBD.corp.ad.wrs.com> References: <47EFB32CD8770A4D9590812EE28C977E9624DBF4@ALA-MBD.corp.ad.wrs.com> Message-ID: <5A83D1C4.9070102@fastmail.com> As far a 1, I'd recommend just use functools.partial or make an object with all the extra stuff u want and have that object provide a __call__ method. 
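For example, a minimal sketch of that first suggestion; the (new_state, triggered_event) callback signature is taken from the state_space example quoted below, while the 'machine-42'/machine_id names are made up:

    import functools

    def print_on_enter(machine_id, new_state, triggered_event):
        # machine_id is bound up front; automaton still invokes the callback
        # with just (new_state, triggered_event)
        print("[%s] Entered '%s' due to '%s'"
              % (machine_id, new_state, triggered_event))

    state_space = [
        {
            'name': 'stopped',
            'next_states': {'play': 'playing'},
            'on_enter': functools.partial(print_on_enter, 'machine-42'),
        },
        # ...
    ]

    # or, equivalently, a callable object that carries the extra identifier:
    class OnEnterPrinter(object):
        def __init__(self, machine_id):
            self.machine_id = machine_id

        def __call__(self, new_state, triggered_event):
            print("[%s] Entered '%s' due to '%s'"
                  % (self.machine_id, new_state, triggered_event))
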
As far as 2, you might have to subclass the FSM baseclass and add those into the internal data-structure (same for 3 I think); ie this one @ https://github.com/openstack/automaton/blob/master/automaton/machines.py#L186-L191 Of course feel free to do it differently and submit a patch that folks (myself and others) can review. -Josh Kwan, Louie wrote: > https://github.com/openstack/automaton > > Friendly state machines for python. > > A few questions about automaton. > > 1.I would like to know can we addition parameters on on_enter or on_exit > callbacks. Right now, it seems it only allows state and triggered_event. > > a.I have many FSM running for different objects and it is much easier if > I can pass on the some sort of ID back to the callbacks. > > 2.Can we or how can we store extra attribute like last state change > *timestamp*? > > 3.Can we store additional identify info for the FSM object? Would like > to add an */UUID/* > > Thanks. > > Louie > > def print_on_enter(new_state, triggered_event): > > print("Entered '%s' due to '%s'" % (new_state, triggered_event)) > > def print_on_exit(old_state, triggered_event): > > print("Exiting '%s' due to '%s'" % (old_state, triggered_event)) > > # This will contain all the states and transitions that our machine will > > # allow, the format is relatively simple and designed to be easy to use. > > state_space = [ > > { > > 'name': 'stopped', > > 'next_states': { > > # On event 'play' transition to the 'playing' state. > > 'play': 'playing', > > 'open_close': 'opened', > > 'stop': 'stopped', > > }, > > 'on_enter': print_on_enter, > > 'on_exit': print_on_exit, > > }, > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From liujiong at gohighsec.com Wed Feb 14 06:13:09 2018 From: liujiong at gohighsec.com (Jiong Liu) Date: Wed, 14 Feb 2018 14:13:09 +0800 Subject: [openstack-dev] [barbican] weekly meeting time Message-ID: <005101d3a55a$e6329270$b297b750$@gohighsec.com> Hi Ade, Thank you for proposing this change! I'm in China, and the second time slot works better for me. Regards, Jiong > Message: 35 > Date: Tue, 13 Feb 2018 10:17:59 -0500 > From: Ade Lee > To: "OpenStack Development Mailing List (not for usage questions)" > > Subject: [openstack-dev] [barbican] weekly meeting time > Message-ID: <1518535079.22990.9.camel at redhat.com> > Content-Type: text/plain; charset="UTF-8" > Hi all, > The Barbican weekly meeting has been fairly sparsely attended for a > little while now, and the most active contributors these days appear to > be in Asia. > Its time to consider moving the weekly meeting to a time when more > contributors can attend. I'm going to propose a couple times below to > start out. > 2 am UTC Tuesday == 9 pm EST Monday == 10 am CST (China) Tuesday > 3 am UTC Tuesday == 10 pm EST Monday == 11 am CST (China) Tuesday > Feel free to propose other days/times. > Thanks, > Ade > P.S. Until decided otherwise, the Barbican meeting remains on Mondays > at 2000 UTC From zhipengh512 at gmail.com Wed Feb 14 07:44:02 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Wed, 14 Feb 2018 15:44:02 +0800 Subject: [openstack-dev] [acceleration]Cancelation of Cyborg Team meeting on Feb 14th and 21th Message-ID: Hi Team, Due to Chinese new year and the approaching PTG, let's cancel this week and next week's team meeting. 
But as usual we will lurk around the IRC channel still, so if you got anything, feel free to shoot on #openstack-cyborg -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias at citynetwork.se Wed Feb 14 08:33:29 2018 From: tobias at citynetwork.se (Tobias Rydberg) Date: Wed, 14 Feb 2018 09:33:29 +0100 Subject: [openstack-dev] [publiccloud-wg] Reminder for todays meeting Message-ID: <4fab430c-0d04-8862-132a-6637b555a021@citynetwork.se> Hi all, Time again for a meeting for the Public Cloud WG - today at 1400 UTC in #openstack-meeting-3 Agenda and etherpad at: https://etherpad.openstack.org/p/publiccloud-wg See you later! Tobias Rydberg -- Tobias Rydberg Senior Developer Mobile: +46 733 312780 www.citynetwork.eu | www.citycloud.com INNOVATION THROUGH OPEN IT INFRASTRUCTURE ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3945 bytes Desc: S/MIME Cryptographic Signature URL: From balazs.gibizer at ericsson.com Wed Feb 14 09:21:08 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Wed, 14 Feb 2018 10:21:08 +0100 Subject: [openstack-dev] [infra] [all] project pipeline definition should stay in project-config or project side ? In-Reply-To: References: <20180213150610.GA26600@localhost.localdomain> Message-ID: <1518600068.18558.5@smtp.office365.com> On Wed, Feb 14, 2018 at 6:45 AM, Andreas Jaeger wrote: > On 2018-02-14 01:28, Ghanshyam Mann wrote: >> On Wed, Feb 14, 2018 at 12:06 AM, Paul Belanger >> wrote: >>> On Tue, Feb 13, 2018 at 11:05:34PM +0900, gmann wrote: >>>> Hi Infra Team, >>>> >>>> I have 1 quick question on zuulv3 jobs and their migration part. >>>> From >>>> zuulv3 doc [1], it is clear about migrating the job definition >>>> and use >>>> those among cross repo pipeline etc. >>>> >>>> But I did not find clear recommendation that whether project's >>>> pipeline definition should stay in project-config or we should >>>> move >>>> that to project side. >>>> >>>> IMO, >>>> 'template' part(which has system level jobs) can stay in >>>> project-config. For example below part- >>>> >>>> >>>> https://github.com/openstack-infra/project-config/blob/e2b82623a4ab60261b37a91e311118301927b9b6/zuul.d/projects.yaml#L10507-L10523 >>>> >>>> Other pipeline definition- 'check', 'gate', 'experimental' etc >>>> should >>>> be move to project repo, mainly this list- >>>> >>>> https://github.com/openstack-infra/project-config/blob/master/zuul.d/projects.yaml#L10524-L11019 >>>> >>>> If we move those past as mentioned above then, we can have a >>>> consolidated place to control the project pipeline for >>>> 'irrelevant-files', specific branch etc >>>> >>>> ..1 https://docs.openstack.org/infra/manual/zuulv3.html >>>> >>> As it works today, pipeline stanza needs to be in a config >>> project[1] >>> (project-config) repo. So what you are suggestion will not work. >>> This was done >>> to allow zuul admins to control which pipelines are setup / >>> configured. 
>>> >>> I am mostly curious why a project would need to modify a pipeline >>> configuration >>> or duplicate it into all projects, over having it central located >>> in >>> project-config. >> >> pipeline stanza and configuration stay in project-config. I mean >> list >> of jobs defined in each pipeline for specific project for example >> here[2]. Now we have list of jobs for each pipeline in 2 places, one >> in project-config [2] and second in project repo[3]. >> >> Issue in having it in 2 places: >> - No single place to check what all jobs project will run with what >> conditions >> - If we need to modify the list of jobs in pipeline or change other >> bits like irrelevant-files etc then it has to be done in >> project-config. So no full control by project side. > For me it is even more than two places as the project templates like 'integarted-gate'[4] defines jobs to be executed on a project that includes the template in the project-config. Which leads to problems like [5]. This shows that tracking down why some job runs on a change is fairly non-trivial from a developer perspective. Therefore I support to define which jobs run on a given project as close to the project as possible and as small number of different places as possible. I even volunteer to help with the moving from nova perspective. > This should be explained in: > https://docs.openstack.org/infra/manual/zuulv3.html#what-to-convert > > So, the standard templates/jobs - incl. PTI mandated ones - should > stay > in project-config, you can move everything else in-tree, As far as I understand this list allows us to solve [5] by simply moving every jobs from 'integrated-gate' to the respective project in tree as the jobs in that template are not part of the PTI. [4] https://github.com/openstack-infra/openstack-zuul-jobs/blob/df8a8e8ee41c1ceb4da458a8681e39de39eafded/zuul.d/zuul-legacy-project-templates.yaml#L93 [5] https://review.openstack.org/#/c/538908 Cheers, gibi From renat.akhmerov at gmail.com Wed Feb 14 10:03:10 2018 From: renat.akhmerov at gmail.com (Renat Akhmerov) Date: Wed, 14 Feb 2018 17:03:10 +0700 Subject: [openstack-dev] [mistral][release][ffe] Requesting FFE for supporting source execution id in the Mistral client Message-ID: Hi, We were asked to do a FFE request to be able to release a new version of Mistral client out of stable/queens branch. The backport patch: https://review.openstack.org/#/c/543393/ The release patch: https://review.openstack.org/#/c/543402 The reason to do that after the feature freeze is that we didn’t backport (and release) this patch by mistake (simply missed it) whereas the corresponding functionality was already included on the server side and went to Queens-3 and subsequent releases. From my side I can assure that the change is backwards compatible and very much wanted in stable/queens by many users. Hence we’re kindly asking to approve the release patch. Thanks Renat Akhmerov @Nokia -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Wed Feb 14 10:17:19 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 14 Feb 2018 19:17:19 +0900 Subject: [openstack-dev] [infra] [all] project pipeline definition should stay in project-config or project side ? 
In-Reply-To: <1518600068.18558.5@smtp.office365.com> References: <20180213150610.GA26600@localhost.localdomain> <1518600068.18558.5@smtp.office365.com> Message-ID: On Wed, Feb 14, 2018 at 6:21 PM, Balázs Gibizer wrote: > > > On Wed, Feb 14, 2018 at 6:45 AM, Andreas Jaeger wrote: >> >> On 2018-02-14 01:28, Ghanshyam Mann wrote: >>> >>> On Wed, Feb 14, 2018 at 12:06 AM, Paul Belanger >>> wrote: >>>> >>>> On Tue, Feb 13, 2018 at 11:05:34PM +0900, gmann wrote: >>>>> >>>>> Hi Infra Team, >>>>> >>>>> I have 1 quick question on zuulv3 jobs and their migration part. From >>>>> zuulv3 doc [1], it is clear about migrating the job definition and use >>>>> those among cross repo pipeline etc. >>>>> >>>>> But I did not find clear recommendation that whether project's >>>>> pipeline definition should stay in project-config or we should move >>>>> that to project side. >>>>> >>>>> IMO, >>>>> 'template' part(which has system level jobs) can stay in >>>>> project-config. For example below part- >>>>> >>>>> >>>>> https://github.com/openstack-infra/project-config/blob/e2b82623a4ab60261b37a91e311118301927b9b6/zuul.d/projects.yaml#L10507-L10523 >>>>> >>>>> Other pipeline definition- 'check', 'gate', 'experimental' etc should >>>>> be move to project repo, mainly this list- >>>>> >>>>> https://github.com/openstack-infra/project-config/blob/master/zuul.d/projects.yaml#L10524-L11019 >>>>> >>>>> If we move those past as mentioned above then, we can have a >>>>> consolidated place to control the project pipeline for >>>>> 'irrelevant-files', specific branch etc >>>>> >>>>> ..1 https://docs.openstack.org/infra/manual/zuulv3.html >>>>> >>>> As it works today, pipeline stanza needs to be in a config project[1] >>>> (project-config) repo. So what you are suggestion will not work. This >>>> was done >>>> to allow zuul admins to control which pipelines are setup / configured. >>>> >>>> I am mostly curious why a project would need to modify a pipeline >>>> configuration >>>> or duplicate it into all projects, over having it central located in >>>> project-config. >>> >>> >>> pipeline stanza and configuration stay in project-config. I mean list >>> of jobs defined in each pipeline for specific project for example >>> here[2]. Now we have list of jobs for each pipeline in 2 places, one >>> in project-config [2] and second in project repo[3]. >>> >>> Issue in having it in 2 places: >>> - No single place to check what all jobs project will run with what >>> conditions >>> - If we need to modify the list of jobs in pipeline or change other >>> bits like irrelevant-files etc then it has to be done in >>> project-config. So no full control by project side. >> >> > > For me it is even more than two places as the project templates like > 'integarted-gate'[4] defines jobs to be executed on a project that includes > the template in the project-config. Which leads to problems like [5]. This > shows that tracking down why some job runs on a change is fairly non-trivial > from a developer perspective. Therefore I support to define which jobs run > on a given project as close to the project as possible and as small number > of different places as possible. I even volunteer to help with the moving > from nova perspective. > > >> This should be explained in: >> https://docs.openstack.org/infra/manual/zuulv3.html#what-to-convert >> >> So, the standard templates/jobs - incl. 
PTI mandated ones - should stay >> in project-config, you can move everything else in-tree, > > > As far as I understand this list allows us to solve [5] by simply moving > every jobs from 'integrated-gate' to the respective project in tree as the > jobs in that template are not part of the PTI. I agree on moving job out of 'integrated-gate''as it cannot define generic 'irrelevant files' for each project. if it define anything then it does not allow to override that. All other projects also have same issue like cinder does not want integrated-gate to run on cinder/test/* files. If moving integrated-gate to each project side then, is there any other use case of defining jobs in 'integrated-gate' ? if no then how about removing the concept of ''integrated-gate'' We had similar issue for Branch things also - https://review.openstack.org/#/c/542484/ -gmann > > > [4] > https://github.com/openstack-infra/openstack-zuul-jobs/blob/df8a8e8ee41c1ceb4da458a8681e39de39eafded/zuul.d/zuul-legacy-project-templates.yaml#L93 > [5] https://review.openstack.org/#/c/538908 > > Cheers, > gibi > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From dougal at redhat.com Wed Feb 14 10:20:50 2018 From: dougal at redhat.com (Dougal Matthews) Date: Wed, 14 Feb 2018 10:20:50 +0000 Subject: [openstack-dev] [mistral][release][ffe] Requesting FFE for supporting source execution id in the Mistral client In-Reply-To: References: Message-ID: On 14 February 2018 at 10:03, Renat Akhmerov wrote: > Hi, > > We were asked to do a FFE request to be able to release a new version of > Mistral client out of stable/queens branch. > > The backport patch: https://review.openstack.org/#/c/543393/ > The release patch: https://review.openstack.org/#/c/543402 > > The reason to do that after the feature freeze is that we didn’t backport > (and release) this patch by mistake (simply missed it) whereas the > corresponding functionality was already included on the server side and > went to Queens-3 and subsequent releases. > > From my side I can assure that the change is backwards compatible and very > much wanted in stable/queens by many users. > Thanks Renat, I agree. This should be safe from a compatibility point of view and simple an oversight. Missing this was an error on our part, we will try to be more organised for future releases. Hence we’re kindly asking to approve the release patch. > > Thanks > > Renat Akhmerov > @Nokia > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Wed Feb 14 10:42:16 2018 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 14 Feb 2018 11:42:16 +0100 Subject: [openstack-dev] [all][infra] PTG Infra Helproom Info and Signup In-Reply-To: <1518561402.929387.1269899112.764A7EF6@webmail.messagingengine.com> References: <1518561402.929387.1269899112.764A7EF6@webmail.messagingengine.com> Message-ID: Clark Boylan wrote: > Last PTG the infra helproom seemed to work out for projects that knew about it. 
The biggest problem seemed to be that other projects either just weren't aware that there is/was an Infra helproom or didn't know when an appropriate time to show up would be. We are going to try a couple things this time around to try and address those issues. > > First of all the Infra team is hosting a helproom at the Dublin PTG. Now you should all know :) The idea is that if projects or individuals have questions for the infra team or problems that we can help you with there is time set aside specifically for this. I'm not sure what room we will be in, you will have to look at the map, but we have the entirety of Monday and Tuesday set aside for this. Also worth noting that it is a "project infrastructure" helproom, in the largest sense. It goes beyond the "Infra" team: you can bring any question around project support from horizontal support teams like QA, release management, requirements, stable team... -- Thierry Carrez (ttx) From dalvarez at redhat.com Wed Feb 14 10:58:56 2018 From: dalvarez at redhat.com (Daniel Alvarez Sanchez) Date: Wed, 14 Feb 2018 11:58:56 +0100 Subject: [openstack-dev] [tripleo] [neutron] Current containerized neutron agents introduce a significant regression in the dataplane In-Reply-To: References: Message-ID: On Wed, Feb 14, 2018 at 5:40 AM, Brian Haley wrote: > On 02/13/2018 05:08 PM, Armando M. wrote: > >> >> >> On 13 February 2018 at 14:02, Brent Eagles > beagles at redhat.com>> wrote: >> >> Hi, >> >> The neutron agents are implemented in such a way that key >> functionality is implemented in terms of haproxy, dnsmasq, >> keepalived and radvd configuration. The agents manage instances of >> these services but, by design, the parent is the top-most (pid 1). >> >> On baremetal this has the advantage that, while control plane >> changes cannot be made while the agents are not available, the >> configuration at the time the agents were stopped will work (for >> example, VMs that are restarted can request their IPs, etc). In >> short, the dataplane is not affected by shutting down the agents. >> >> In the TripleO containerized version of these agents, the supporting >> processes (haproxy, dnsmasq, etc.) are run within the agent's >> container so when the container is stopped, the supporting processes >> are also stopped. That is, the behavior with the current containers >> is significantly different than on baremetal and stopping/restarting >> containers effectively breaks the dataplane. At the moment this is >> being considered a blocker and unless we can find a resolution, we >> may need to recommend running the L3, DHCP and metadata agents on >> baremetal. >> > > I didn't think the neutron metadata agent was affected but just the > ovn-metadata agent? Or is there a problem with the UNIX domain sockets the > haproxy instances use to connect to it when the container is restarted? That's right. In ovn-metadata-agent we spawn haproxy inside the q-ovnmeta namespace and this is where we'll find a problem if the process goes away. As you said, neutron metadata agent is basically receiving the proxied requests from haproxies residing in either q-router or q-dhcp namespaces on its UNIX socket and sending them to Nova. > > > There's quite a bit to unpack here: are you suggesting that running these >> services in HA configuration doesn't help either with the data plane being >> gone after a stop/restart? 
Ultimately this boils down to where the state is >> persisted, and while certain agents rely on namespaces and processes whose >> ephemeral nature is hard to persist, enough could be done to allow for a >> non-disruptive bumping of the afore mentioned services. >> > > Armando - https://review.openstack.org/#/c/542858/ (if accepted) should > help with dataplane downtime, as sharing the namespaces lets them persist, > which eases what the agent has to configure on the restart of a container > (think of what the l3-agent needs to create for 1000 routers). > > But it doesn't address dnsmasq being unavailable when the dhcp-agent > container is restarted like it is today. Maybe one way around that is to > run 2+ agents per network, but that still leaves a regression from how it > works today. Even with l3-ha I'm not sure things are perfect, might > wind-up with two masters sometimes. > > I've seen one suggestion of putting all these processes in their own > container instead of the agent container so they continue to run, it just > might be invasive to the neutron code. Maybe there is another option? I had some idea based on that one to reduce the impact on neutron code and its dependency on containers. Basically, we would be running dnsmasq, haproxy, keepalived, radvd, etc in separate containers (it makes sense as they have independent lifecycles) and we would drive those through the docker socket from neutron agents. In order to reduce this dependency, I thought of having some sort of 'rootwrap-daemon-docker' which takes the commands and checks if it has to spawn the process in a separate container (for example, iptables wouldn't be the case) and if so, it'll use the docker socket to do it. We'll also have to monitor the PID files on those containers to respawn them in case they die. IMHO, this is far from the containers philosophy since we're using host networking, privileged access, sharing namespaces, relying on 'sidecar' containers... but I can't think of a better way to do it. > > -Brian > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From saverio.proto at switch.ch Wed Feb 14 11:07:55 2018 From: saverio.proto at switch.ch (Saverio Proto) Date: Wed, 14 Feb 2018 12:07:55 +0100 Subject: [openstack-dev] [horizon] collectstatic with custom theme is broken at least since Ocata In-Reply-To: <38631337-B70D-488D-A9E5-F59693EEE942@cern.ch> References: <38631337-B70D-488D-A9E5-F59693EEE942@cern.ch> Message-ID: <8bfb4509-1f1c-8932-c6ba-76c21f075742@switch.ch> Hello Mateusz, thanks for your input. I just want to confirm that a patch was merged in master and backported all the way back to Ocata to fix the bug. details here: https://bugs.launchpad.net/horizon/+bug/1744239 thank you Saverio On 05.02.18 14:54, Mateusz Kowalski wrote: > Hi, > > We are running Horizon in Pike and cannot confirm having the same problem as you describe. We are using a custom theme however the folder structure is a bit different than the one you presented in the bug report. > In our case we have > > - /usr/share/openstack-dashboard/openstack_dashboard/themes > |-- cern > |-- default > |-- material > > what means we do not modify at all files inside "default". 
Let me know if you want to compare more deeply our changes to see where the problem comes from, as for us "theme_file.split('/templates/')" does not cause the trouble. > > Cheers, > Mateusz > >> On 5 Feb 2018, at 14:44, Saverio Proto wrote: >> >> Hello, >> >> I have tried to find a fix to this: >> >> https://ask.openstack.org/en/question/107544/ocata-theme-customization-with-templates/ >> https://bugs.launchpad.net/horizon/+bug/1744239 >> https://review.openstack.org/#/c/536039/ >> >> I am upgrading from Newton to Pike. >> >> Here the real question is: how is it possible that this bug was found so >> late ??? >> >> There is at least another operator that documented hitting this bug in >> Ocata. >> >> Probably this bug went unnoticed because you hit it only if you have >> customizations for Horizon. All the automatic testing does not notice >> this bug. >> >> What I cannot undestand is. >> - are we two operators hitting a corner case ? >> - No one else uses Horizon with custom themes in production with >> version newer than Newton ? >> >> This is all food for your brainstorming about LTS,bugfix branches, >> release cycle changes.... >> >> Cheers, >> >> Saverio >> >> >> -- >> SWITCH >> Saverio Proto, Peta Solutions >> Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland >> phone +41 44 268 15 15, direct +41 44 268 1573 >> saverio.proto at switch.ch, http://www.switch.ch >> >> http://www.switch.ch/stories >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- SWITCH Saverio Proto, Peta Solutions Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland phone +41 44 268 15 15, direct +41 44 268 1573 saverio.proto at switch.ch, http://www.switch.ch http://www.switch.ch/stories From liam.young at canonical.com Wed Feb 14 11:29:24 2018 From: liam.young at canonical.com (Liam Young) Date: Wed, 14 Feb 2018 11:29:24 +0000 Subject: [openstack-dev] [charms] Message-ID: Hi, I would like to propose that we do not support the notifications method for automatically creating DNS records in Queens+. This method for achieving Neutron integration has been superseded both upstream and in the charms. By removing support for it in Queens we prevent the charm from attempting to make designate v1 api calls for Queens+ which is a positive thing given it will have been removed ( https://docs.openstack.org/releasenotes/designate/queens.html#critical-issues ). Thanks Liam From james.page at canonical.com Wed Feb 14 11:38:59 2018 From: james.page at canonical.com (James Page) Date: Wed, 14 Feb 2018 11:38:59 +0000 Subject: [openstack-dev] [charms] In-Reply-To: References: Message-ID: +1 On Wed, 14 Feb 2018 at 11:29 Liam Young wrote: > Hi, > > I would like to propose that we do not support the notifications > method for automatically creating DNS records in Queens+. This method > for achieving Neutron integration has been superseded both upstream > and in the charms. 
By removing support for it in Queens we prevent the > charm from attempting to make designate v1 api calls for Queens+ which > is a positive thing given it will have been removed ( > > https://docs.openstack.org/releasenotes/designate/queens.html#critical-issues > ). > > Thanks > Liam > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From james.page at ubuntu.com Wed Feb 14 11:39:38 2018 From: james.page at ubuntu.com (James Page) Date: Wed, 14 Feb 2018 11:39:38 +0000 Subject: [openstack-dev] [charms] In-Reply-To: References: Message-ID: +1 On Wed, 14 Feb 2018 at 11:29 Liam Young wrote: > Hi, > > I would like to propose that we do not support the notifications > method for automatically creating DNS records in Queens+. This method > for achieving Neutron integration has been superseded both upstream > and in the charms. By removing support for it in Queens we prevent the > charm from attempting to make designate v1 api calls for Queens+ which > is a positive thing given it will have been removed ( > > https://docs.openstack.org/releasenotes/designate/queens.html#critical-issues > ). > > Thanks > Liam > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex.kavanagh at canonical.com Wed Feb 14 12:35:11 2018 From: alex.kavanagh at canonical.com (Alex Kavanagh) Date: Wed, 14 Feb 2018 12:35:11 +0000 Subject: [openstack-dev] [charms] In-Reply-To: References: Message-ID: Yes, that seems like a reasonable approach. +1 On Wed, Feb 14, 2018 at 11:29 AM, Liam Young wrote: > Hi, > > I would like to propose that we do not support the notifications > method for automatically creating DNS records in Queens+. This method > for achieving Neutron integration has been superseded both upstream > and in the charms. By removing support for it in Queens we prevent the > charm from attempting to make designate v1 api calls for Queens+ which > is a positive thing given it will have been removed ( > https://docs.openstack.org/releasenotes/designate/queens. > html#critical-issues > ). > > Thanks > Liam > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Alex Kavanagh - Software Engineer Cloud Dev Ops - Solutions & Product Engineering - Canonical Ltd -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bdobreli at redhat.com Wed Feb 14 13:01:12 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Wed, 14 Feb 2018 14:01:12 +0100 Subject: [openstack-dev] [tripleo] [neutron] Current containerized neutron agents introduce a significant regression in the dataplane In-Reply-To: References: Message-ID: <6247d617-4a87-eb1c-eb0d-6af76e5ee4ce@redhat.com> On 2/14/18 11:58 AM, Daniel Alvarez Sanchez wrote: > > > On Wed, Feb 14, 2018 at 5:40 AM, Brian Haley > wrote: > > On 02/13/2018 05:08 PM, Armando M. wrote: > > > > On 13 February 2018 at 14:02, Brent Eagles >> wrote: > >     Hi, > >     The neutron agents are implemented in such a way that key >     functionality is implemented in terms of haproxy, dnsmasq, >     keepalived and radvd configuration. The agents manage > instances of >     these services but, by design, the parent is the top-most > (pid 1). > >     On baremetal this has the advantage that, while control plane >     changes cannot be made while the agents are not available, the >     configuration at the time the agents were stopped will work > (for >     example, VMs that are restarted can request their IPs, etc). In >     short, the dataplane is not affected by shutting down the > agents. > >     In the TripleO containerized version of these agents, the > supporting >     processes (haproxy, dnsmasq, etc.) are run within the agent's >     container so when the container is stopped, the supporting > processes >     are also stopped. That is, the behavior with the current > containers >     is significantly different than on baremetal and > stopping/restarting >     containers effectively breaks the dataplane. At the moment > this is >     being considered a blocker and unless we can find a > resolution, we >     may need to recommend running the L3, DHCP and metadata > agents on >     baremetal. > > > I didn't think the neutron metadata agent was affected but just the > ovn-metadata agent?  Or is there a problem with the UNIX domain > sockets the haproxy instances use to connect to it when the > container is restarted? > > > That's right. In ovn-metadata-agent we spawn haproxy inside the > q-ovnmeta namespace > and this is where we'll find a problem if the process goes away. As you > said, neutron > metadata agent is basically receiving the proxied requests from > haproxies residing > in either q-router or q-dhcp namespaces on its UNIX socket and sending > them to Nova. > > > > There's quite a bit to unpack here: are you suggesting that > running these services in HA configuration doesn't help either > with the data plane being gone after a stop/restart? Ultimately > this boils down to where the state is persisted, and while > certain agents rely on namespaces and processes whose ephemeral > nature is hard to persist, enough could be done to allow for a > non-disruptive bumping of the afore mentioned services. > > > Armando - https://review.openstack.org/#/c/542858/ > (if accepted) should help > with dataplane downtime, as sharing the namespaces lets them > persist, which eases what the agent has to configure on the restart > of a container (think of what the l3-agent needs to create for 1000 > routers). > > But it doesn't address dnsmasq being unavailable when the dhcp-agent > container is restarted like it is today.  Maybe one way around that > is to run 2+ agents per network, but that still leaves a regression > from how it works today.  Even with l3-ha I'm not sure things are > perfect, might wind-up with two masters sometimes. 
> > I've seen one suggestion of putting all these processes in their own > container instead of the agent container so they continue to run, it > just might be invasive to the neutron code.  Maybe there is another > option? > > > I had some idea based on that one to reduce the impact on neutron code > and its dependency on > containers. Basically, we would be running dnsmasq, haproxy, keepalived, > radvd, etc > in separate containers (it makes sense as they have independent > lifecycles) and we would drive +1 for that separation > those through the docker socket from neutron agents. In order to reduce > this dependency, I > thought of having some sort of 'rootwrap-daemon-docker' which takes the Let's please avoid using 'docker' in names, could it be rootwrap-cri or rootwrap-engine-moby or something? > commands and > checks if it has to spawn the process in a separate container (for > example, iptables wouldn't > be the case) and if so, it'll use the docker socket to do it. > We'll also have to monitor the PID files on those containers to respawn > them in case they > die. > > IMHO, this is far from the containers philosophy since we're using host > networking, > privileged access, sharing namespaces, relying on 'sidecar' > containers... but I can't think of > a better way to do it. This still looks fitting well into the k8s pods concept [0], with healthchecks and shared namespaces and logical coupling of sidecars, which is the agents and helping daemons running in namespaces. I hope it does. [0] https://kubernetes.io/docs/concepts/workloads/pods/pod/ > > > > -Brian > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Best regards, Bogdan Dobrelya, Irc #bogdando From lbragstad at gmail.com Wed Feb 14 14:26:44 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Wed, 14 Feb 2018 08:26:44 -0600 Subject: [openstack-dev] [keystone] [policy] policy meeting update Message-ID: Last week during the policy meeting we questioned whether or not we needed weekly meetings anymore [0], especially since the agenda has been relatively sparse recently. Let's hold off on policy meetings until we get to the PTG. There, we're going to be discussing RBAC and all that fun stuff anyway, so we can revisit the weekly meeting and use it if needed. Thanks, Lance [0] http://eavesdrop.openstack.org/meetings/policy/2018/policy.2018-02-07-16.00.log.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrea.frittoli at gmail.com Wed Feb 14 14:50:04 2018 From: andrea.frittoli at gmail.com (Andrea Frittoli) Date: Wed, 14 Feb 2018 14:50:04 +0000 Subject: [openstack-dev] [all][infra] PTG Infra Helproom Info and Signup In-Reply-To: References: <1518561402.929387.1269899112.764A7EF6@webmail.messagingengine.com> Message-ID: On Wed, Feb 14, 2018 at 10:42 AM Thierry Carrez wrote: > Clark Boylan wrote: > > Last PTG the infra helproom seemed to work out for projects that knew > about it. 
The biggest problem seemed to be that other projects either just > weren't aware that there is/was an Infra helproom or didn't know when an > appropriate time to show up would be. We are going to try a couple things > this time around to try and address those issues. > > > > First of all the Infra team is hosting a helproom at the Dublin PTG. Now > you should all know :) The idea is that if projects or individuals have > questions for the infra team or problems that we can help you with there is > time set aside specifically for this. I'm not sure what room we will be in, > you will have to look at the map, but we have the entirety of Monday and > Tuesday set aside for this. > > Also worth noting that it is a "project infrastructure" helproom, in the > largest sense. It goes beyond the "Infra" team: you can bring any > question around project support from horizontal support teams like QA, > Indeed, thanks for pointing that out. A lot of us from the QA team will be in Dublin, available during the help ours for questions or topics you may want to discuss. There's usually enough time to sit down and hack a few things on the spot... and there are enough infra/qa cores around to get things reviewed and merged during the week. On the QA side we don't have an ethercalc (yet?) but if there are topics you would like to discuss / develop please add something to the etherpad. Andrea Frittoli (andreaf) [1] https://etherpad.openstack.org/p/qa-rocky-ptg > release management, requirements, stable team... > > -- > Thierry Carrez (ttx) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gr at ham.ie Wed Feb 14 14:57:29 2018 From: gr at ham.ie (Graham Hayes) Date: Wed, 14 Feb 2018 14:57:29 +0000 Subject: [openstack-dev] [designate] V1 API is now fully removed Message-ID: <750a3ea9-87ea-cb57-b8f6-90ca50e55952@ham.ie> I saw [1] and realised that we should be more explicit about the upcoming release. As highlighted in [2], this email is a reminder that the long awaited removal of the DNS V1 API is now complete with [3]. This means from Queens onwards it will not be possible to re-enable the V1 API (we have had it off by default for a long period of time). Horizon, Heat and the OpenStack CLI all have v2 usable resources, and have been deprecating the v1 resources for some time. Any deployment tooling, custom dashboards, and internal tools should all ensure they do not require the v1 API, and do not try to enable it. - Graham 1 - http://lists.openstack.org/pipermail/openstack-dev/2018-February/127366.html 2 - https://docs.openstack.org/releasenotes/designate/queens.html#critical-issues 3 - https://github.com/openstack/designate/commit/c318106c01b2b3976049f2c3ba0c8502a874242b -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: OpenPGP digital signature URL: From andrea.frittoli at gmail.com Wed Feb 14 15:01:27 2018 From: andrea.frittoli at gmail.com (Andrea Frittoli) Date: Wed, 14 Feb 2018 15:01:27 +0000 Subject: [openstack-dev] [nova][neutron][infra] zuul job definitions overrides and the irrelevant-file attribute In-Reply-To: <1516975504.9811.5@smtp.office365.com> References: <1516975504.9811.5@smtp.office365.com> Message-ID: On Fri, Jan 26, 2018 at 2:05 PM Balázs Gibizer wrote: > Hi, > > I'm getting more and more confused how the zuul job hierarchy works or is > supposed to work. > > First there was a bug in nova that some functional tests are not triggered > although the job (re-)definition in the nova part of the project-config > should not prevent it to run [1]. > > There we figured out that irrelevant-files parameter of the jobs are not > something that can be overriden during re-definition or through > parent-child relationship. The base job openstack-tox-functional has an > irrelevant-files attribute that lists '^doc/.*$' as a path to be ignored > [2]. In the other hand the nova part of the project-config tries to make > this ignore less broad by adding only '^doc/source/.*$' . This does not > work as we expected and the job did not run on changes that only affected > ./doc/notification_samples path. We are fixing it by defining our own > functional job in nova tree [4]. > > [1] https://bugs.launchpad.net/nova/+bug/1742962 > [2] > https://github.com/openstack-infra/openstack-zuul-jobs/blob/1823e3ea20e6dfaf37786a6ff79c56cb786bf12c/zuul.d/jobs.yaml#L380 > [3] > https://github.com/openstack-infra/project-config/blob/1145ab1293f5fa4d34c026856403c22b091e673c/zuul.d/projects.yaml#L10509 > [4] https://review.openstack.org/#/c/533210/ > > Then I started looking into other jobs to see if we made similar mistakes. > I found two other examples in the nova related jobs where redefining the > irrelevant-files of a job caused problems. In these examples nova tried to > ignore more paths during the override than what was originally ignored in > the job definition but that did not work [5][6]. > > [5] https://bugs.launchpad.net/nova/+bug/1745405 (temptest-full) > [6] https://bugs.launchpad.net/nova/+bug/1745431 (neutron-grenade) > > So far the problem seemed to be consistent (i.e. override does not work). > But then I looked into neutron-grenade-multinode. That job is defined in > neutron tree (like neutron-grenade) > That is wrong and it should not have happened. Grenade jobs that are shared by all the repos in the integrated gate should live in repos owned by QA/infra - it was never the plan for them to end up in the neutron repo. We're working on making grenade and multinode zuulv3 native jobs. Grenade jobs will live in the grenade repo; once they're ready we will remove the legacy ones from the neutron side and add the new ones defined in grenade. Changes will be done to the job template accordingly, which means that for teams that are consuming those jobs via the template only, there'll be no action. Andrea Frittoli (andreaf) > but nova also refers to it in nova section of the project-config with > different irrelevant-files than their original definition. So I assumed > that this will lead to similar problem than in case of neutron-grenade, but > it doesn't.
> > The neutron-grenade-multinode original definition [1] does not try to > ignore the 'nova/tests' path but the nova side of the definition in the > project config does try to ignore that path [8]. Interestingly a patch in > nova that only changes under the path: nova/tests/ does not trigger the job > [9]. So in this case overriding the irrelevant-files of a job works. (It > seems that overriding neutron-tempest-linuxbridge irrelevant-files works > too). > > [7] > https://github.com/openstack/neutron/blob/7e3d6a18fb928bcd303a44c1736d0d6ca9c7f0ab/.zuul.yaml#L140-L159 > [8] > https://github.com/openstack-infra/project-config/blob/5ddbd62a46e17dd2fdee07bec32aa65e3b637ff3/zuul.d/projects.yaml#L10516-L10530 > [9] https://review.openstack.org/#/c/537936/ > > I don't see what is the difference between neutron-grenade and > neutron-grenade-multinode jobs definitions from this perspective but it > seems that the irrelevent-files attribute behaves inconsistently in these > two jobs. Could you please help me undestand how irrelevant-files in > overriden jobs supposed to work? > > cheers, > gibi > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko.vancsa at gmail.com Wed Feb 14 15:07:57 2018 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Wed, 14 Feb 2018 16:07:57 +0100 Subject: [openstack-dev] [os-upstream-institute][ptg] Working sessions at the PTG Message-ID: <94B2EDF3-79E1-4BED-A8D0-5E28906EC5A0@gmail.com> Hi, As discussed earlier we would like to leverage the opportunity to face to face working sessions at the PTG with the training team and anyone who’s interested to join/help out. We have one dedicated time slot to meet and summarize our activities during the week and list of action points which is __Thursday 4pm-6pm__ local time. I will announce the room for the discussion in a follow up email. To extend the time to work together in smaller groups we came up with the idea of a fixed work space, rather than a list of fixed time slots. The idea is to have a table dedicated to our team in the venue where anyone who’s not occupied can go and meet up with others to work on tasks we need to finish before the next training in Vancouver. Based on an earlier check the expectation is to have people available from Tuesday afternoon. You can use the IRC channel and the PTG bot to inform others about your availability to further encourage team work. The topics[1] for these working sessions are mainly focused on restructuring the training flow and improve the exercises. For discussions on the Contributor Guide content please check the Docs sessions[2], we will cover all related topics there including moving the lectures over from the training guides repository. With that said we will also need to look into the translation aspects of the planned changes to keep supporting the broad community who relies on our materials. Last but not least please also try to attend the First Contact SIG discussions[3] as they are relevant to the training activities as well and can affect how we structure and execute our courses in the future. I will update you with details as we are getting closer to the PTG. Please let me know if you have any questions or comments. 
Thanks and Best Regards, Ildikó [1] https://etherpad.openstack.org/p/OUI-Rocky-PTG [2] https://etherpad.openstack.org/p/docs-i18n-ptg-rocky [3] https://etherpad.openstack.org/p/FC_SIG_Rocky_PTG From andrea.frittoli at gmail.com Wed Feb 14 15:13:30 2018 From: andrea.frittoli at gmail.com (Andrea Frittoli) Date: Wed, 14 Feb 2018 15:13:30 +0000 Subject: [openstack-dev] [nova][neutron][infra] zuul job definitions overrides and the irrelevant-file attribute In-Reply-To: <874ln8v6ye.fsf@meyer.lemoncheese.net> References: <1516975504.9811.5@smtp.office365.com> <874ln8v6ye.fsf@meyer.lemoncheese.net> Message-ID: On Fri, Jan 26, 2018 at 5:57 PM James E. Blair wrote: > Balázs Gibizer writes: > > > Hi, > > > > I'm getting more and more confused how the zuul job hierarchy works or > > is supposed to work. > > Hi! > > First, you (or others) may or may not have seen this already -- some of > it didn't exist when we first rolled out v3, and some of it has changed > -- but here are the relevant bits of the documentation that should help > explain what's going on. It helps to understand freezing: > > https://docs.openstack.org/infra/zuul/user/config.html#job > > and matching: > > https://docs.openstack.org/infra/zuul/user/config.html#matchers > > > First there was a bug in nova that some functional tests are not > > triggered although the job (re-)definition in the nova part of the > > project-config should not prevent it to run [1]. > > > > There we figured out that irrelevant-files parameter of the jobs are > > not something that can be overriden during re-definition or through > > parent-child relationship. The base job openstack-tox-functional has > > an irrelevant-files attribute that lists '^doc/.*$' as a path to be > > ignored [2]. In the other hand the nova part of the project-config > > tries to make this ignore less broad by adding only '^doc/source/.*$' > > . This does not work as we expected and the job did not run on changes > > that only affected ./doc/notification_samples path. We are fixing it > > by defining our own functional job in nova tree [4]. > > > > [1] https://bugs.launchpad.net/nova/+bug/1742962 > > [2] > > > https://github.com/openstack-infra/openstack-zuul-jobs/blob/1823e3ea20e6dfaf37786a6ff79c56cb786bf12c/zuul.d/jobs.yaml#L380 > > [3] > > > https://github.com/openstack-infra/project-config/blob/1145ab1293f5fa4d34c026856403c22b091e673c/zuul.d/projects.yaml#L10509 > > [4] https://review.openstack.org/#/c/533210/ > > This is correct. The issue here is that the irrelevant-files definition > on openstack-tox-functional is too broad. We need to be *extremely* > careful applying matchers to jobs like that. Generally I think that > irrelevant-files should be reserved for the project-pipeline invocations > only. That's how they were effectively used in Zuul v2, after all. > > Essentially, when someone puts an irrelevant-files section on a job like > that, they are saying "this job will never apply to these files, ever." > That's clearly not correct in this case. > > So our solutions are to acknowledge that it's over-broad, and reduce or > eliminate the list in [2] and expand it elsewhere (as in [3]). Or we > can say "we were generally correct, but nova is extra special so it > needs its own job". If that's the choice, then I think [4] is a fine > solution. > > > Then I started looking into other jobs to see if we made similar > > mistakes. I found two other examples in the nova related jobs where > > redefining the irrelevant-files of a job caused problems. 
In these > > examples nova tried to ignore more paths during the override than what > > was originally ignored in the job definition but that did not work > > [5][6]. > > > > [5] https://bugs.launchpad.net/nova/+bug/1745405 (temptest-full) > > As noted in that bug, the tempest-full job is invoked on nova via this > stanza: > > > https://github.com/openstack-infra/project-config/blob/5ddbd62a46e17dd2fdee07bec32aa65e3b637ff3/zuul.d/projects.yaml#L10674-L10688 > > As expected, that did not match. There is a second invocation of > tempest-full on nova here: > > > http://git.openstack.org/cgit/openstack-infra/openstack-zuul-jobs/tree/zuul.d/zuul-legacy-project-templates.yaml#n126 I guess the line number changed since so this has moved to L101 [1] now :). tempest-full is part of the integrated-gate, so all projects in it run it through there. [1] http://git.openstack.org/cgit/openstack-infra/openstack-zuul-jobs/tree/zuul.d/zuul-legacy-project-templates.yaml#n101 > > > That has no irrelevant-files matches, and so matches everything. If you > drop the use of that template, it will work as expected. Or, if you can > say with some certainty that nova's irrelevant-files set is not > over-broad, you could move the irrelevant-files from nova's invocation > into the template, or even the job, and drop nova's individual > invocation. > > I don't think projects in the integrated gate should remove themselves from the template, it really helps keeping consistency. The pattern I've seen is that most projects repeat the same list of irrelevant files over and over again in all of their integration tests, It would be handy in future to be able to set irrelevant-files on a template when it's consumed. So we could have shared irrelevant files defined in the template, and custom ones added by each project when consuming the template. I don't this is is possible today. Does it sound like a reasonable feature request? Andrea Frittoli (andreaf) > > [6] https://bugs.launchpad.net/nova/+bug/1745431 (neutron-grenade) > > The same template invokes this job as well. > > > So far the problem seemed to be consistent (i.e. override does not > > work). But then I looked into neutron-grenade-multinode. That job is > > defined in neutron tree (like neutron-grenade) but nova also refers to > > it in nova section of the project-config with different > > irrelevant-files than their original definition. So I assumed that > > this will lead to similar problem than in case of neutron-grenade, but > > it doesn't. > > > > The neutron-grenade-multinode original definition [1] does not try to > > ignore the 'nova/tests' path but the nova side of the definition in > > the project config does try to ignore that path [8]. Interestingly a > > patch in nova that only changes under the path: nova/tests/ does not > > trigger the job [9]. So in this case overriding the irrelevant-files > > of a job works. (It seems that overriding neutron-tempest-linuxbridge > > irrelevant-files works too). > > > > [7] > > > https://github.com/openstack/neutron/blob/7e3d6a18fb928bcd303a44c1736d0d6ca9c7f0ab/.zuul.yaml#L140-L159 > > [8] > > > https://github.com/openstack-infra/project-config/blob/5ddbd62a46e17dd2fdee07bec32aa65e3b637ff3/zuul.d/projects.yaml#L10516-L10530 > > [9] https://review.openstack.org/#/c/537936/ > > > > I don't see what is the difference between neutron-grenade and > > neutron-grenade-multinode jobs definitions from this perspective but > > it seems that the irrelevent-files attribute behaves inconsistently > > in these two jobs. 
Could you please help me undestand how > > irrelevant-files in overriden jobs supposed to work? > > These jobs only have the one invocation -- on the nova project -- and > are not added via a template. > > Hopefully that explains the difference. > > Basically, the irrelevant-files on at least one project-pipeline > invocation of a job have to match, as well as at least one definition of > the job. So if both things have irrelevant-files, then it's effectively > a union of the two. > > I used a tool to help verify some of the information in this message, > especially the bugs [5] and [6]. You can ask Zuul to output debug > information about its job selection if you're dealing with confusing > situations like this. I went ahead and pushed a new patchset to your > test change to demonstrate how: > > https://review.openstack.org/537936 > > When it finishes running all the tests (in a few hours), it should > include in its report debug information about the decision-making > process for the jobs it ran. It outputs similar information into the > debug logs; so that we don't have to wait for it to see what it looks > like here is that copy: > > http://paste.openstack.org/show/653729/ > > The relevant lines for [5] are: > > 2018-01-26 13:07:53,560 DEBUG zuul.layout: Pipeline variant tempest-full branches: None source: > openstack-infra/openstack-zuul-jobs/zuul.d/zuul-legacy-project-templates.yaml at master#126> > matched > 2018-01-26 13:07:53,560 DEBUG zuul.layout: Pipeline variant tempest-full branches: None source: > openstack-infra/project-config/zuul.d/projects.yaml at master#10485> did not > match > > Note the project-file-branch-line-number references are especially > helpful. > > -Jim > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From giuseppe.decandia at gmail.com Wed Feb 14 15:35:29 2018 From: giuseppe.decandia at gmail.com (Pino de Candia) Date: Wed, 14 Feb 2018 09:35:29 -0600 Subject: [openstack-dev] [security] Security PTG Planning, x-project request for topics. In-Reply-To: References: Message-ID: Hi Luke, Omer (in CC) has confirmed that he can stand in for me if needed, but my preference would be that you conference me in. If you won't know until the very day whether conference equipment is available, that's fine, we can figure it out last minute. A projector will be useful either way. thanks! Pino On Mon, Feb 12, 2018 at 2:45 AM, Luke Hinds wrote: > > > On Sun, Feb 11, 2018 at 4:01 PM, Pino de Candia < > giuseppe.decandia at gmail.com> wrote: > >> I uploaded the demo video (https://youtu.be/y6ICCPO08d8) and linked it >> from the slides. >> > > Thanks Pino , i added these to the agenda: > > https://etherpad.openstack.org/p/security-ptg-rocky > > Please let me know before the PTG, if it will be your colleague or if we > need to find a projector to conference you in. > > >> On Fri, Feb 9, 2018 at 5:51 PM, Pino de Candia < >> giuseppe.decandia at gmail.com> wrote: >> >>> Hi Folks, >>> >>> here are the slides for the Tatu presentation: https://docs.goo >>> gle.com/presentation/d/1HI5RR3SNUu1If-A5Zi4EMvjl-3TKsBW20xEUyYHapfM >>> >>> I meant to record the demo video as well but I haven't gotten around to >>> editing all the bits. Please stay tuned. 
>>> >>> thanks, >>> Pino >>> >>> >>> On Tue, Feb 6, 2018 at 10:52 AM, Giuseppe de Candia < >>> giuseppe.decandia at gmail.com> wrote: >>> >>>> Hi Luke, >>>> >>>> Fantastic! An hour would be great if the schedule allows - there are >>>> lots of different aspects we can dive into and potential future directions >>>> the project can take. >>>> >>>> thanks! >>>> Pino >>>> >>>> >>>> >>>> On Tue, Feb 6, 2018 at 10:36 AM, Luke Hinds wrote: >>>> >>>>> >>>>> >>>>> On Tue, Feb 6, 2018 at 4:21 PM, Giuseppe de Candia < >>>>> giuseppe.decandia at gmail.com> wrote: >>>>> >>>>>> Hi Folks, >>>>>> >>>>>> I know the request is very late, but I wasn't aware of this SIG until >>>>>> recently. Would it be possible to present a new project to the Security SIG >>>>>> at the PTG? I need about 30 minutes. I'm hoping to drum up interest in the >>>>>> project, sign on users and contributors and get feedback. >>>>>> >>>>>> For the past few months I have been working on a new project - Tatu >>>>>> [1]- to automate the management of SSH certificates (for both users and >>>>>> hosts) in OpenStack. Tatu allows users to generate SSH certificates with >>>>>> principals based on their Project role assignments, and VMs automatically >>>>>> set up their SSH host certificate (and related config) via Nova vendor >>>>>> data. The project also manages bastions and DNS entries so that users don't >>>>>> have to assign Floating IPs for SSH nor remember IP addresses. >>>>>> >>>>>> I have a working demo (including Horizon panels [2] and OpenStack CLI >>>>>> [3]), but am still working on the devstack script and patches [4] to get >>>>>> Tatu's repositories into OpenStack's GitHub and Gerrit. I'll try to post a >>>>>> demo video in the next few days. >>>>>> >>>>>> best regards, >>>>>> Pino >>>>>> >>>>>> >>>>>> References: >>>>>> >>>>>> 1. https://github.com/pinodeca/tatu (Please note this is still >>>>>> very much a work in progress, lots of TODOs in the code, very little >>>>>> testing and documentation doesn't reflect the latest design). >>>>>> 2. https://github.com/pinodeca/tatu-dashboard >>>>>> 3. https://github.com/pinodeca/python-tatuclient >>>>>> 4. https://review.openstack.org/#/q/tatu >>>>>> >>>>>> >>>>>> >>>>>> >>>>> Hi Giuseppe, of course you can! I will add you to the agenda. We could >>>>> get your an hour if it allows more time for presenting and post discussion? >>>>> >>>>> We will be meeting in an allocated room on Monday (details to follow). >>>>> >>>>> https://etherpad.openstack.org/p/security-ptg-rocky >>>>> >>>>> Luke >>>>> >>>>> >>>>> >>>>> >>>>>> >>>>>> >>>>>> On Wed, Jan 31, 2018 at 12:03 PM, Luke Hinds >>>>>> wrote: >>>>>> >>>>>>> >>>>>>> On Mon, Jan 29, 2018 at 2:29 PM, Adam Young >>>>>>> wrote: >>>>>>> >>>>>>>> Bug 968696 and System Roles. Needs to be addressed across the >>>>>>>> Service catalog. >>>>>>>> >>>>>>> >>>>>>> Thanks Adam, will add it to the list. I see it's been open since >>>>>>> 2012! >>>>>>> >>>>>>> >>>>>>>> >>>>>>>> On Mon, Jan 29, 2018 at 7:38 AM, Luke Hinds >>>>>>>> wrote: >>>>>>>> >>>>>>>>> Just a reminder as we have not had many uptakes yet.. >>>>>>>>> >>>>>>>>> Are there any projects (new and old) that would like to make use >>>>>>>>> of the security SIG for either gaining another perspective on security >>>>>>>>> challenges / blueprints etc or for help gaining some cross project >>>>>>>>> collaboration? 
>>>>>>>>> >>>>>>>>> On Thu, Jan 11, 2018 at 3:33 PM, Luke Hinds >>>>>>>>> wrote: >>>>>>>>> >>>>>>>>>> Hello All, >>>>>>>>>> >>>>>>>>>> I am seeking topics for the PTG from all projects, as this will >>>>>>>>>> be where we try out are new form of being a SIG. >>>>>>>>>> >>>>>>>>>> For this PTG, we hope to facilitate more cross project >>>>>>>>>> collaboration topics now that we are a SIG, so if your project has a >>>>>>>>>> security need / problem / proposal than please do use the security SIG room >>>>>>>>>> where a larger audience may be present to help solve problems and gain >>>>>>>>>> x-project consensus. >>>>>>>>>> >>>>>>>>>> Please see our PTG planning pad [0] where I encourage you to add >>>>>>>>>> to the topics. >>>>>>>>>> >>>>>>>>>> [0] https://etherpad.openstack.org/p/security-ptg-rocky >>>>>>>>>> >>>>>>>>>> -- >>>>>>>>>> Luke Hinds >>>>>>>>>> Security Project PTL >>>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> ____________________________________________________________ >>>>>>>>> ______________ >>>>>>>>> OpenStack Development Mailing List (not for usage questions) >>>>>>>>> Unsubscribe: OpenStack-dev-request at lists.op >>>>>>>>> enstack.org?subject:unsubscribe >>>>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>>> ____________________________________________________________ >>>>>>>> ______________ >>>>>>>> OpenStack Development Mailing List (not for usage questions) >>>>>>>> Unsubscribe: OpenStack-dev-request at lists.op >>>>>>>> enstack.org?subject:unsubscribe >>>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>>>>> >>>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> Luke Hinds | NFV Partner Engineering | CTO Office | Red Hat >>>>>>> e: lhinds at redhat.com | irc: lhinds @freenode | t: +44 12 52 36 2483 >>>>>>> >>>>>>> ____________________________________________________________ >>>>>>> ______________ >>>>>>> OpenStack Development Mailing List (not for usage questions) >>>>>>> Unsubscribe: OpenStack-dev-request at lists.op >>>>>>> enstack.org?subject:unsubscribe >>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>>>> >>>>>>> >>>>>> >>>>>> ____________________________________________________________ >>>>>> ______________ >>>>>> OpenStack Development Mailing List (not for usage questions) >>>>>> Unsubscribe: OpenStack-dev-request at lists.op >>>>>> enstack.org?subject:unsubscribe >>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>>> >>>>>> >>>>> >>>>> >>>>> -- >>>>> Luke Hinds | NFV Partner Engineering | CTO Office | Red Hat >>>>> e: lhinds at redhat.com | irc: lhinds @freenode | t: +44 12 52 36 2483 >>>>> >>>> >>>> >>> >> > > > -- > Luke Hinds | NFV Partner Engineering | CTO Office | Red Hat > e: lhinds at redhat.com | irc: lhinds @freenode | t: +44 12 52 36 2483 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From prometheanfire at gentoo.org Wed Feb 14 15:55:03 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Wed, 14 Feb 2018 09:55:03 -0600 Subject: [openstack-dev] [mistral][release][ffe] Requesting FFE for supporting source execution id in the Mistral client In-Reply-To: References: Message-ID: <20180214155503.nogxlhigghgs3ijp@gentoo.org> On 18-02-14 17:03:10, Renat Akhmerov wrote: > Hi, > > We were asked to do a FFE request to be able to release a new version of Mistral client out of stable/queens branch. 
> > The backport patch: https://review.openstack.org/#/c/543393/ > The release patch: https://review.openstack.org/#/c/543402 > > The reason to do that after the feature freeze is that we didn’t backport (and release) this patch by mistake (simply missed it) whereas the corresponding functionality was already included on the server side and went to Queens-3 and subsequent releases. > > From my side I can assure that the change is backwards compatible and very much wanted in stable/queens by many users. > > Hence we’re kindly asking to approve the release patch. > FFE approved from the requirements side. -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From jim at jimrollenhagen.com Wed Feb 14 15:58:02 2018 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Wed, 14 Feb 2018 10:58:02 -0500 Subject: [openstack-dev] [ironic][triploe] support for firmware update In-Reply-To: <81F9552C-298A-4394-835B-0641E2F4F4D9@telfer.org> References: <81F9552C-298A-4394-835B-0641E2F4F4D9@telfer.org> Message-ID: On Mon, Feb 12, 2018 at 6:52 PM, Stig Telfer wrote: > Hi Moshe - > > It seems a bit risky to automatically apply firmware updates. For > example, given a node will probably be rebooted for firmware updates to > take effect, if other vendors also did this then perhaps the node could > reboot unexpectedly in the middle of your update. In theory. > This depends on how one implements automatic firmware updates. We did something like this when I was at Rackspace via a number of hardware managers. Essentially, we created a hardware manager class for each type of hardware that we wanted to be able to update. We shipped the firmware in the agent ramdisk, and hardcoded the firmware version in code (as we would need to ship a new ramdisk to ship a new firmware anyway). Each hardware manager had a clean step that would check if the firmware needed an update, and do the update if required, rebooting afterwards. As clean steps run serially, there isn't much risk of them stepping on each other. The approach we’ve taken on handling firmware updates[1] has been to create > a hardware manager for verifying firmware values during node cleaning and > raising an exception if they do not match. The consequence is, nodes will > drop into maintenance mode for manual inspection / intervention. We’ve > then booted the node into a custom image to perform the update. > > Hope this helps, > Stig > > [1] https://github.com/stackhpc/stackhpc-ipa-hardware-managers > > > On 8 Feb 2018, at 07:43, Moshe Levi wrote: > > > > Hi all, > > > > I saw that ironic-python-agent support custom hardware manager. > > I would like to support firmware updates (In my case Mellanox nic) and I > was wandering how custom hardware manager can be used in such case? > There are a few examples of hardware managers out there that might be helpful[0][1]. These add clean steps that update firmware when the node goes through cleaning (which is after enrollment, and after an instance is deleted). > How it is integrated with ironic-python agent and also is there an > integration to tripleO as well. > I've never done much with tripleo, so I'm not sure if they have a built-in way to include a hardware manager or if you'd need to build your own ramdisk and tell tripleo to use that. 
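In outline, such a manager is a small Python class: it advertises whether it applies to the node, lists one or more clean steps, and implements the step itself. The sketch below is illustrative only -- the class name, helper stubs, priority value and firmware paths are made-up placeholders, and only the HardwareManager interface and the clean-step dictionary keys come from ironic-python-agent:

    from ironic_python_agent import hardware

    # Illustrative placeholders: a real manager carries vendor-specific
    # values and tooling here. Shipping new firmware means building a new
    # ramdisk with these updated.
    FIRMWARE_VERSION = '1.2.3'
    FIRMWARE_IMAGE = '/opt/firmware/example-nic-1.2.3.bin'


    class ExampleNICHardwareManager(hardware.HardwareManager):
        HARDWARE_MANAGER_NAME = 'ExampleNICHardwareManager'
        HARDWARE_MANAGER_VERSION = '1.0'

        def evaluate_hardware_support(self):
            # Only claim nodes that actually carry the device we know about.
            if self._nic_present():
                return hardware.HardwareSupport.SERVICE_PROVIDER
            return hardware.HardwareSupport.NONE

        def get_clean_steps(self, node, ports):
            # Clean steps from all managers run serially, ordered by priority.
            return [{
                'step': 'update_nic_firmware',
                'priority': 95,
                'interface': 'deploy',
                'reboot_requested': True,   # new firmware takes effect on reboot
                'abortable': False,
            }]

        def update_nic_firmware(self, node, ports):
            # Skip the flash if the device is already at the target level.
            if self._read_firmware_version() == FIRMWARE_VERSION:
                return
            self._flash_firmware(FIRMWARE_IMAGE)

        # Vendor-specific plumbing would replace these stubs.
        def _nic_present(self):
            raise NotImplementedError

        def _read_firmware_version(self):
            raise NotImplementedError

        def _flash_firmware(self, image):
            raise NotImplementedError
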
As far as integrating it with ironic-python-agent, just make your hardware manager something that can be installed by pip, and add entrypoints similar to the example[2]. Then, just install it alongside the agent when building the image, and it will be included. // jim [0] https://github.com/openstack/proliantutils/blob/master/proliantutils/ipa_hw_manager/hardware_manager.py [1] https://github.com/openstack/ipa-example-hardware-managers/blob/master/example_hardware_managers/example_device.py [2] https://github.com/openstack/ipa-example-hardware-managers/blob/master/setup.cfg#L19 -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Wed Feb 14 16:05:01 2018 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 14 Feb 2018 10:05:01 -0600 Subject: [openstack-dev] [tripleo][python3] python3 readiness? In-Reply-To: References: <20180213195745.pzucdooks24nbaqr@barron.net> <06558ec3-3452-0e7d-3f6d-c0897ceff2a7@nemebean.com> Message-ID: <0074a615-9440-b1c4-7226-8f3dc9c729d4@nemebean.com> On 02/13/2018 05:30 PM, David Moreau Simard wrote: > On Tue, Feb 13, 2018 at 5:53 PM, Ben Nemec wrote: >> >> I guess if RDO has chosen this path then we don't have much choice. > > This makes it sound like we had a choice to begin with. > We've already had a lot of discussions around the topic but we're > ultimately stuck between a rock and a hard place. > > We're in this together and it's important that everyone understands > what's going on. > > It's not a secret to anyone that Fedora is more or less the upstream to RHEL. > There's no py3 available in RHEL 7. > The alternative to making things work in Fedora is to use Software > Collections [1]. > > If you're not familiar with Software Collections for python, it's more > or less the installation of RPM packages in a virtualenv. > Installing the "rh-python35" SCL would: > - Set up a chroot in /opt/rh/rh-python35/root > - Set up a py35 interpreter at /opt/rh/rh-python35/root/usr/bin/python3 > > And then, when you would install packages *against* that SCL, they > would end up being installed > in /opt/rh/rh-python35/root/usr/lib/python3.5/site-packages/. > > That means that you need *all* of your python packages to be built > against the software collections and installed in the right path. > > Python script with a #!/usr/bin/python shebang ? Probably not going to work. > Need python-requests ? Nope, sclo-python35-python-requests. > Need one of the 1000+ python packages maintained by RDO ? > Those need to be re-built and maintained against the SCL too. > > If you want to see what it looks like in practice, here's a Zuul spec > file [2] or the official docs for SCL [3]. Ick, I didn't realize SCLs were that bad. /me dons his fireproof suit I know this is a dirty word around these parts, but I note that EPEL appears to have python 3 packages... Ultimately, though, I'm not in a position to be making any definitive statements about how to handle this. RDO has more consumers than just TripleO. The purpose of my email was mostly to provide some historical perspective from back when we were doing TripleO CI on Fedora, why we're not doing that anymore, and in fact went so far as to explicitly disable Fedora in the undercloud installer. If Fedora is still our best option then so be it, but I don't want anyone to think it's going to be as simple as s/CentOS/Fedora/ (I assume no one does, but you know what they say about ass-u-me :-). 
> > Making stuff work on Fedora is not going to be easy for anyone but it > sure beats messing with 1500+ packages that we'd need to untangle > later. > Most of the hard work for Fedora is already done as far as packaging > is concerned, we never really stopped building packages for Fedora > [4]. > > It means we should be prepared once RHEL 8 comes out. > > [1]: https://www.softwarecollections.org/en/ > [2]: https://softwarefactory-project.io/r/gitweb?p=scl/zuul-distgit.git;a=blob;f=zuul.spec;h=6bba6a79c1f8ff844a9ea3715ab2cef1b12d323f;hb=refs/heads/master > [3]: https://www.softwarecollections.org/en/docs/guide/#chap-Packaging_Software_Collections > [4]: https://trunk.rdoproject.org/fedora-rawhide/report.html > > David Moreau Simard > Senior Software Engineer | OpenStack RDO > > dmsimard = [irc, github, twitter] > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From corvus at inaugust.com Wed Feb 14 16:05:13 2018 From: corvus at inaugust.com (James E. Blair) Date: Wed, 14 Feb 2018 08:05:13 -0800 Subject: [openstack-dev] [nova][neutron][infra] zuul job definitions overrides and the irrelevant-file attribute In-Reply-To: (Andrea Frittoli's message of "Wed, 14 Feb 2018 15:13:30 +0000") References: <1516975504.9811.5@smtp.office365.com> <874ln8v6ye.fsf@meyer.lemoncheese.net> Message-ID: <87inazblqe.fsf@meyer.lemoncheese.net> Andrea Frittoli writes: >> That has no irrelevant-files matches, and so matches everything. If you >> drop the use of that template, it will work as expected. Or, if you can >> say with some certainty that nova's irrelevant-files set is not >> over-broad, you could move the irrelevant-files from nova's invocation >> into the template, or even the job, and drop nova's individual >> invocation. >> > I don't think projects in the integrated gate should remove themselves > from the > template, it really helps keeping consistency. > > The pattern I've seen is that most projects repeat the same list of > irrelevant files > over and over again in all of their integration tests, It would be handy in > future to > be able to set irrelevant-files on a template when it's consumed. > So we could have shared irrelevant files defined in the template, and > custom ones > added by each project when consuming the template. I don't this is is > possible today. > Does it sound like a reasonable feature request? A template may specify many jobs, so if we added something like that feature, what would the project-pipeline template application's irrelevant files apply to? All of the jobs in the template? We could do that. But it only takes one exception for this approach to fall short, and while a lot of irrelevant-files stanzas for a project are similar, I don't think having exceptions will be unusual. The only way to handle exceptions like that is to specify them with jobs, and we're back in the same situation. Also, combining irrelevant-files is very difficult to think about. Because it's two inverse matches, combining them ends up being the intersection, not the union. So if we implemented this, I don't think we should have any irrelevant-files in the template, only on template application. 
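A toy model may help make that combination rule concrete. Assuming the simplified view that a variant is skipped only when every changed file matches one of its irrelevant-files patterns, and that both the job definition and the project-pipeline invocation have to match for the job to run, the effective irrelevant set behaves like the union of the two lists (this is only an illustration of the rule described above, not Zuul's actual matcher code):

    import re

    def variant_matches(changed_files, irrelevant_files):
        # A variant is skipped only if *every* changed file is irrelevant to it.
        if not irrelevant_files:
            return True
        return not all(any(re.search(p, f) for p in irrelevant_files)
                       for f in changed_files)

    def job_runs(changed_files, job_def_irrelevant, project_irrelevant):
        # Both the job definition and the project-pipeline invocation must match.
        return (variant_matches(changed_files, job_def_irrelevant) and
                variant_matches(changed_files, project_irrelevant))

    # The nova functional-test case from earlier in the thread: the base job
    # ignores ^doc/.*$ while the project-pipeline entry only ignores
    # ^doc/source/.*$, so a change touching only doc/notification_samples/
    # still does not trigger the job.
    print(job_runs(['doc/notification_samples/instance-create.json'],
                   ['^doc/.*$'], ['^doc/source/.*$']))   # False
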
I still tend to think that irrelevant-files are almost invariably project-specific, so we should avoid using them in templates and job definitions (unless absolutely certain they are universally applicable), and we should only define them in one place -- in the project-pipeline definition for individual jobs. -Jim From prometheanfire at gentoo.org Wed Feb 14 16:09:47 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Wed, 14 Feb 2018 10:09:47 -0600 Subject: [openstack-dev] [requirements][trove][tatu][barbican][compass][daisycloud][freezer][fuel][nova][openstack-ansible][pyghmi][solum] Migration from pycrypto Message-ID: <20180214160947.dowuweoigacnfztt@gentoo.org> Development has stalled, (since 2014). It's been forked but now would be a good time to move to a more actively maintained crypto library like cryptography. Requirements wishes to drop pycrypto. Let me know if there's anything we can do to facilitate this. -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From assaf at redhat.com Wed Feb 14 16:16:44 2018 From: assaf at redhat.com (Assaf Muller) Date: Wed, 14 Feb 2018 11:16:44 -0500 Subject: [openstack-dev] [neutron] [OVN] L3 traffic In-Reply-To: References: Message-ID: On Tue, Feb 13, 2018 at 11:24 PM, Numan Siddique wrote: > > > On Wed, Feb 14, 2018 at 4:19 AM, Assaf Muller wrote: >> >> I'm not aware of plans for OVN to supported distributed SNAT, therefor >> a networking node will still be required for the foreseeable future. >> >> On Mon, Jan 15, 2018 at 2:18 AM, wenran xiao wrote: >> > Hey all, >> > I have found Network OVN will support to distributed floating ip >> > >> > (https://docs.openstack.org/releasenotes/networking-ovn/unreleased.html), >> > how about the snat in the future? Still need network node or not? >> > Any suggestions are welcomed. > > > OVN can select any node (or nodes if HA is enabled) to schedule a router as > long as the node has ovn-controller service running in it and > ovn-bridge-mappings configured properly. > So, If you have external connectivity in your compute nodes and you are fine > with any of these compute nodes doing the centralized snat, you don't need > to have a network node. To be clear that is at parity with ML2/OVS, you can install L3 agents on any node with external connectivity, regardless if it's also a compute node. Some deployment tools support this, like TripleO. 
> > Thanks > Numan > >> > >> > >> > Best regards >> > Ran >> > >> > >> > __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: >> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From openstack at nemebean.com Wed Feb 14 16:19:50 2018 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 14 Feb 2018 10:19:50 -0600 Subject: [openstack-dev] [tripleo][python3] python3 readiness? In-Reply-To: References: <20180213195745.pzucdooks24nbaqr@barron.net> <06558ec3-3452-0e7d-3f6d-c0897ceff2a7@nemebean.com> Message-ID: <158103a7-943e-8466-a7c9-3d4dd82b918c@nemebean.com> On 02/13/2018 10:24 PM, Haïkel wrote: > RDO has *yet* to choose a plan, and people were invited to work on the > "stabilized" repository draft [0]. If anyone has a better plan that fits all the > constraints, please share it asap. > Whatever the plan, we're launching it with the Rocky cycle. > > Among the constraints (but not limited to): > * EL8 is not available > * No Python3 on EL7 *and* no allocated resources to maintain it (that includes > rebuilding/maintaining *all* python modules + libraries) I have to admit I don't entirely understand this constraint. CentOS 7 is in support until 2024. I would think RHEL 7's timeline is similar or even longer. If Python 2 is going out of support in 2020, does that mean there will be no supported Python on CentOS for the last four years of its lifecycle? In fact, the more I think about this the more I feel like there's a fundamental problem with the way we're handling this transition. We're not the only ones who are going to feel the pain of having disjoint Python releases from 7 to 8. Anyone running a Python application now gets to not only do a major OS upgrade, but also a major Python upgrade. Sure, it's worse for us because we need to support EL 8 at release, but _everyone_ is going to feel some variation on this pain as they move forward. I realize this is a discussion that's probably above my pay grade, but I feel I would be remiss if I didn't point out that our Python support strategy seems very flawed. > * Bridge the gap between EL7 and EL8, Fedora 27/28 are the closest thing we > have to EL8 [1][2] > * SCL have a cost (and I cannot yet expose why but not jumping onto the SCL > bandwagon has proven to be the right bet) > * Have something stable enough so that upstream gate can use it. > That's why plan stress that updates will be gated (definition of how > is still open) > * Manage to align planets so that we can ship version X of OpenStack [3] on EL8 > without additional delay > > Well, I cannot say that I can't relate to what you're saying, though. [4] Indeed. This sounds like a pub track discussion if I ever heard one. :-) > > Regards, > H. 
> > [0] https://etherpad.openstack.org/p/stabilized-fedora-repositories-for-openstack > [1] Do not assume anything on EL8 (name included) it's more > complicated than that. > [2] Take a breath, but we might have to ship RDO as modules, not just RPMs or > Containers. I already have headaches about it. > [3] Do not ask which one, I do not know :) > [4] Good thing that next PTG will be in Dublin, I'll need a lot of > irish whiskey :) > >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From hongbin034 at gmail.com Wed Feb 14 16:25:19 2018 From: hongbin034 at gmail.com (Hongbin Lu) Date: Wed, 14 Feb 2018 11:25:19 -0500 Subject: [openstack-dev] [zun][kuryr][kuryr-libnetwork][neutron] Gate breakage due to removal of tag extension Message-ID: Hi all, Zun's gate is currently broken due to the removal of tag extension [1] at neutron side. The reason is that Zun has a dependency on Kuryr-libnetwork and Kuryr-libnetwork relies on the tag extension that was removed. A quick fixup is to revert the tag extension removal patch [2]. This will unblock the gate immediately. Potential alternative fixes are welcome as long as it can quickly unblock the gate. Your help is greatly appreciated. [1] https://review.openstack.org/#/c/534964/ [2] https://review.openstack.org/#/c/544179/ Best regards, Hongbin -------------- next part -------------- An HTML attachment was scrubbed... URL: From ihrachys at redhat.com Wed Feb 14 16:25:53 2018 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Wed, 14 Feb 2018 08:25:53 -0800 Subject: [openstack-dev] [neutron] [networking-ovn] Non voting jobs for networking-ovn on neutron. In-Reply-To: References: Message-ID: On Mon, Dec 4, 2017 at 5:20 AM, Miguel Angel Ajo Pelayo wrote: > We were thinking about the option of having a couple of non-voting jobs > on > the neutron check for networking-ovn. It'd be great for us, in terms of > traceability, > we re-use a lot of the neutron unit test base clases/etc, and sometimes > we get hit by surprises. I don't think unit test base classes are meant to break, and regardless, a better path would be to move those pieces that you reuse into neutron-lib and consume from there. I know some drivers reuse a lot more than just the base test class, f.e. taking ml2 unit test classes verbatim; I don't think this is the use case that we should ultimately support. > > Sometimes some other changes hit us on the neutron scenario tests. > > So it'd be great to have them if you believe it's a reasonable thing. Yamamoto's concern is valid in that if we don't set clear standards of what could be included, we would open a can of worms with every stadium driver asking for a slot in check queue. That being said, I don't necessarily agree it's unacceptable; what I am saying is that drivers would need to fulfill specific requirements (for example, all tempest tests for major api extensions executed for reference implementation would need to pass; there may be more requirements than just that). 
I think having some non-voting *tempest* jobs for each major driver (ovn, odl, midonet?) could be useful. We have a job specific to ironic in our check queue. I would say accommodating for better integration with our own stadium participants can be even more helpful. Ihar From openstack at nemebean.com Wed Feb 14 16:41:59 2018 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 14 Feb 2018 10:41:59 -0600 Subject: [openstack-dev] [oslo] Meetings for the next two weeks Message-ID: Hi, I will be on PTO for the next Oslo meeting, and the week after that is the PTG so there will be no meeting. Doug Hellmann has offered to run the meeting next week if necessary, but as of this writing there is nothing new on the agenda and unless that changes I'm inclined to skip it. Many of us will be meeting face-to-face the following week anyway. Speaking of, feel free to add topics to https://etherpad.openstack.org/p/oslo-ptg-rocky if there's anything you'd like to discuss. As always, you don't have to wait for a scheduled meeting to start any discussions either. #openstack-oslo is open 24/7, although responses may be delayed after hours. :-) -Ben From andrea.frittoli at gmail.com Wed Feb 14 16:43:10 2018 From: andrea.frittoli at gmail.com (Andrea Frittoli) Date: Wed, 14 Feb 2018 16:43:10 +0000 Subject: [openstack-dev] [nova][neutron][infra] zuul job definitions overrides and the irrelevant-file attribute In-Reply-To: <87inazblqe.fsf@meyer.lemoncheese.net> References: <1516975504.9811.5@smtp.office365.com> <874ln8v6ye.fsf@meyer.lemoncheese.net> <87inazblqe.fsf@meyer.lemoncheese.net> Message-ID: On Wed, Feb 14, 2018 at 4:05 PM James E. Blair wrote: > Andrea Frittoli writes: > > >> That has no irrelevant-files matches, and so matches everything. If you > >> drop the use of that template, it will work as expected. Or, if you can > >> say with some certainty that nova's irrelevant-files set is not > >> over-broad, you could move the irrelevant-files from nova's invocation > >> into the template, or even the job, and drop nova's individual > >> invocation. > >> > > I don't think projects in the integrated gate should remove themselves > > from the > > template, it really helps keeping consistency. > > > > The pattern I've seen is that most projects repeat the same list of > > irrelevant files > > over and over again in all of their integration tests, It would be handy > in > > future to > > be able to set irrelevant-files on a template when it's consumed. > > So we could have shared irrelevant files defined in the template, and > > custom ones > > added by each project when consuming the template. I don't this is is > > possible today. > > Does it sound like a reasonable feature request? > > A template may specify many jobs, so if we added something like that > feature, what would the project-pipeline template application's > irrelevant files apply to? All of the jobs in the template? We could > do that. That's what I was thinking about, > But it only takes one exception for this approach to fall > short, and while a lot of irrelevant-files stanzas for a project are > similar, I don't think having exceptions will be unusual. The only way > to handle exceptions like that is to specify them with jobs, and we're > back in the same situation. > > Also, combining irrelevant-files is very difficult to think about. > Because it's two inverse matches, combining them ends up being the > intersection, not the union. 
So if we implemented this, I don't think > we should have any irrelevant-files in the template, only on template > application. > > I still tend to think that irrelevant-files are almost invariably > project-specific, so we should avoid using them in templates and job > definitions (unless absolutely certain they are universally applicable), > and we should only define them in one place -- in the project-pipeline > definition for individual jobs. > I agree with your concerns, but the problem is that the current implementation renders job templates rather useless. Each project has to re-add each job in a template in its pipeline content definition to be able to apply irrelevant files, and that will turn stale if a template is modified. With the migration to zuulv3 native jobs there is a lot of job renaming and adding/ removing jobs going on, so for instance if a job is removed what used to be a setting irrelevant files may become running an extra job. I don't really have a solution for this, but perhaps someone has an idea? Andrea Frittoli (andreaf) > -Jim > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jungleboyj at gmail.com Wed Feb 14 16:59:15 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Wed, 14 Feb 2018 10:59:15 -0600 Subject: [openstack-dev] [cinder] No Weekly Meeting 2/21/18 or 2/28/18 Message-ID: <3e992378-121d-b332-c36d-f9161e272bf7@gmail.com> Team, We will not be having a weekly meeting on 2/21/2018 or 2/28/2018 due to people traveling to the PTG and the PTG itself. Regular meeting will return on 3/7/2018 . See you all at the PTG! Jay From dmsimard at redhat.com Wed Feb 14 17:13:14 2018 From: dmsimard at redhat.com (David Moreau Simard) Date: Wed, 14 Feb 2018 12:13:14 -0500 Subject: [openstack-dev] [tripleo][python3] python3 readiness? In-Reply-To: <158103a7-943e-8466-a7c9-3d4dd82b918c@nemebean.com> References: <20180213195745.pzucdooks24nbaqr@barron.net> <06558ec3-3452-0e7d-3f6d-c0897ceff2a7@nemebean.com> <158103a7-943e-8466-a7c9-3d4dd82b918c@nemebean.com> Message-ID: On Wed, Feb 14, 2018 at 11:19 AM, Ben Nemec wrote: > > I have to admit I don't entirely understand this constraint. CentOS 7 is in > support until 2024. I would think RHEL 7's timeline is similar or even > longer. If Python 2 is going out of support in 2020, does that mean there > will be no supported Python on CentOS for the last four years of its > lifecycle? The OpenStack community is definitely not expected to support py2 beyond 2020. If RHEL and CentOS wants to support py2 beyond that date, the burden is on them. The RHEL 8 release date is unknown. We can only speculate that it should be "sometime soon" based on previous release dates [1]. I don't know if it's going to be an official goal to drop py27 support in OpenStack for Rocky but we can't wait at the last minute -- py3 support has been a goal for a long time [2]. It doesn't mean that RDO has to support a python2 version of OpenStack on EL7 after upstream has dropped support for it. It's similar to how EL6 support was eventually dropped after moving on from py26 (or was it py25?) and we started shipping on EL7. > In fact, the more I think about this the more I feel like there's a > fundamental problem with the way we're handling this transition. We're not > the only ones who are going to feel the pain of having disjoint Python > releases from 7 to 8. Anyone running a Python application now gets to not > only do a major OS upgrade, but also a major Python upgrade. 
Sure, it's > worse for us because we need to support EL 8 at release, but _everyone_ is > going to feel some variation on this pain as they move forward. Python 3 has been out since 2008 [3], yup, 10 years ago.. and here we are. I remember when most of this board was red [4]. > I realize this is a discussion that's probably above my pay grade, but I > feel I would be remiss if I didn't point out that our Python support > strategy seems very flawed. It's no use questioning the decisions that lead RHEL7 to ship without py3 in 2014, we can only look forward at this point :) [1]: https://en.wikipedia.org/wiki/Red_Hat_Enterprise_Linux#Version_history [2]: https://governance.openstack.org/tc/goals/pike/python35.html [3]: https://en.wikipedia.org/wiki/History_of_Python#Version_3 [4]: https://python3wos.appspot.com/ David Moreau Simard Senior Software Engineer | OpenStack RDO dmsimard = [irc, github, twitter] From billy.olsen at gmail.com Wed Feb 14 17:49:22 2018 From: billy.olsen at gmail.com (Billy Olsen) Date: Wed, 14 Feb 2018 10:49:22 -0700 Subject: [openstack-dev] [charms] In-Reply-To: References: Message-ID: <59fd87f0-0e35-97d7-ec64-3e7755b5eb18@gmail.com> Seems very reasonable. +1 On 02/14/2018 05:35 AM, Alex Kavanagh wrote: > Yes, that seems like a reasonable approach. +1 > > On Wed, Feb 14, 2018 at 11:29 AM, Liam Young > wrote: > > Hi, > > I would like to propose that we do not support the notifications > method for automatically creating DNS records in Queens+. This method > for achieving Neutron integration has been superseded both upstream > and in the charms. By removing support for it in Queens we prevent the > charm from attempting to make designate v1 api calls for Queens+ which > is a positive thing given it will have been removed ( > https://docs.openstack.org/releasenotes/designate/queens.html#critical-issues > > ). > > Thanks > Liam > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > -- > Alex Kavanagh - Software Engineer > Cloud Dev Ops - Solutions & Product Engineering - Canonical Ltd > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From sean.mcginnis at gmx.com Wed Feb 14 19:55:53 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 14 Feb 2018 13:55:53 -0600 Subject: [openstack-dev] [requirements][trove][tatu][barbican][compass][daisycloud][freezer][fuel][nova][openstack-ansible][pyghmi][solum] Migration from pycrypto In-Reply-To: <20180214160947.dowuweoigacnfztt@gentoo.org> References: <20180214160947.dowuweoigacnfztt@gentoo.org> Message-ID: <20180214195552.GA18614@sm-xps> On Wed, Feb 14, 2018 at 10:09:47AM -0600, Matthew Thode wrote: > Development has stalled, (since 2014). It's been forked but now would > be a good time to move to a more actively maintained crypto library like > cryptography. > > Requirements wishes to drop pycrypto. Let me know if there's anything > we can do to facilitate this. > > -- > Matthew Thode (prometheanfire) We did have a discussion on the ML, and I think a little at one of the PTGs, about the path forward for this. 
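For the common symmetric-cipher case the swap itself tends to be fairly mechanical; as a rough illustrative sketch only (key and nonce handling simplified), the legacy pycrypto call shown in the comment maps onto cryptography like this:

    import os

    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(32)      # illustrative key/nonce handling only
    nonce = os.urandom(16)
    data = b'payload'

    # Legacy pycrypto style being removed:
    #   from Crypto.Cipher import AES
    #   from Crypto.Util import Counter
    #   ctr = Counter.new(128, initial_value=int.from_bytes(nonce, 'big'))
    #   ciphertext = AES.new(key, AES.MODE_CTR, counter=ctr).encrypt(data)

    # Rough equivalent with cryptography:
    encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce),
                       backend=default_backend()).encryptor()
    ciphertext = encryptor.update(data) + encryptor.finalize()
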
IIRC, there was one other potential supported package that was considered for an option, but we settled on cryptography as the recommended path forward to get off of pycrypto. I think it had to do with ease of being able to just drop in the new package with minimal affected code. From prometheanfire at gentoo.org Wed Feb 14 19:59:29 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Wed, 14 Feb 2018 13:59:29 -0600 Subject: [openstack-dev] [requirements][trove][tatu][barbican][compass][daisycloud][freezer][fuel][nova][openstack-ansible][pyghmi][solum] Migration from pycrypto In-Reply-To: <20180214195552.GA18614@sm-xps> References: <20180214160947.dowuweoigacnfztt@gentoo.org> <20180214195552.GA18614@sm-xps> Message-ID: <20180214195929.muysdpwt77y3lln5@gentoo.org> On 18-02-14 13:55:53, Sean McGinnis wrote: > On Wed, Feb 14, 2018 at 10:09:47AM -0600, Matthew Thode wrote: > > Development has stalled, (since 2014). It's been forked but now would > > be a good time to move to a more actively maintained crypto library like > > cryptography. > > > > Requirements wishes to drop pycrypto. Let me know if there's anything > > we can do to facilitate this. > > > > -- > > Matthew Thode (prometheanfire) > > We did have a discussion on the ML, and I think a little at one of the PTGs, > about the path forward for this. IIRC, there was one other potential supported > package that was considered for an option, but we settled on cryptography as > the recommended path forward to get off of pycrypto. I think it had to do with > ease of being able to just drop in the new package with minimal affected code. > Yep, I remember it, I'm not mentioning it because I'd like to focus on moving to cryptography rather than move to the fork. -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From prometheanfire at gentoo.org Wed Feb 14 20:10:10 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Wed, 14 Feb 2018 14:10:10 -0600 Subject: [openstack-dev] [FFE][requirements][release][oslo] osprofiler bug fix needed In-Reply-To: References: Message-ID: <20180214201010.hg4ghct4md4ukitk@gentoo.org> On 18-02-12 02:14:51, Nguyễn Trọng Vĩnh (Tovin Seven) wrote: > Hello, > > Currently, Oslo release for Queens is out. > However, OSProfiler faces an issue that make some Nova CLI command not > working. > Detail for this issue: https://launchpad.net/bugs/1743586 > > Patch that fix this bug: https://review.openstack.org/#/c/535219/ > Back port for this: https://review.openstack.org/#/c/537735/ > Release new version for OSProfiler with this bug fix in Queens: > https://review.openstack.org/#/c/541645/ > > Therefore, I send this email to get a FFE for it. > Requirements is fine with this, sorry for the delay. -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From mriedemos at gmail.com Wed Feb 14 20:51:57 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 14 Feb 2018 14:51:57 -0600 Subject: [openstack-dev] [nova] heads up to users of Aggregate[Core|Ram|Disk]Filter: behavior change in >= Ocata In-Reply-To: <2ce313c6-90ff-9db9-ab0f-4b573c0f472b@gmail.com> References: <1253b153-9e7e-6dbe-1836-5a9a2f059bdb@gmail.com> <2ce313c6-90ff-9db9-ab0f-4b573c0f472b@gmail.com> Message-ID: <55df94ae-c2a5-23b5-d330-93c50e0211b7@gmail.com> On 2/5/2018 9:00 PM, Matt Riedemann wrote: > Given the size and detail of this thread, I've tried to summarize the > problems and possible solutions/workarounds in this etherpad: > > https://etherpad.openstack.org/p/nova-aggregate-filter-allocation-ratio-snafu > > > For those working on this, please check that what I have written down is > correct and then we can try to make some kind of plan for resolving this. Jay has a spec up for review now: https://review.openstack.org/#/c/544683/ It would be great to get operator feedback on that. -- Thanks, Matt From hguemar at fedoraproject.org Wed Feb 14 21:26:41 2018 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Wed, 14 Feb 2018 22:26:41 +0100 Subject: [openstack-dev] [tripleo][python3] python3 readiness? In-Reply-To: <0074a615-9440-b1c4-7226-8f3dc9c729d4@nemebean.com> References: <20180213195745.pzucdooks24nbaqr@barron.net> <06558ec3-3452-0e7d-3f6d-c0897ceff2a7@nemebean.com> <0074a615-9440-b1c4-7226-8f3dc9c729d4@nemebean.com> Message-ID: 2018-02-14 17:05 GMT+01:00 Ben Nemec : > > > On 02/13/2018 05:30 PM, David Moreau Simard wrote: >> >> On Tue, Feb 13, 2018 at 5:53 PM, Ben Nemec wrote: >>> >>> >>> I guess if RDO has chosen this path then we don't have much choice. >> >> >> This makes it sound like we had a choice to begin with. >> We've already had a lot of discussions around the topic but we're >> ultimately stuck between a rock and a hard place. >> >> We're in this together and it's important that everyone understands >> what's going on. >> >> It's not a secret to anyone that Fedora is more or less the upstream to >> RHEL. >> There's no py3 available in RHEL 7. >> The alternative to making things work in Fedora is to use Software >> Collections [1]. >> >> If you're not familiar with Software Collections for python, it's more >> or less the installation of RPM packages in a virtualenv. >> Installing the "rh-python35" SCL would: >> - Set up a chroot in /opt/rh/rh-python35/root >> - Set up a py35 interpreter at /opt/rh/rh-python35/root/usr/bin/python3 >> >> And then, when you would install packages *against* that SCL, they >> would end up being installed >> in /opt/rh/rh-python35/root/usr/lib/python3.5/site-packages/. >> >> That means that you need *all* of your python packages to be built >> against the software collections and installed in the right path. >> >> Python script with a #!/usr/bin/python shebang ? Probably not going to >> work. >> Need python-requests ? Nope, sclo-python35-python-requests. >> Need one of the 1000+ python packages maintained by RDO ? >> Those need to be re-built and maintained against the SCL too. >> >> If you want to see what it looks like in practice, here's a Zuul spec >> file [2] or the official docs for SCL [3]. > > > Ick, I didn't realize SCLs were that bad. 
> And that's only the tip of the iceberg :) > /me dons his fireproof suit > > I know this is a dirty word around these parts, but I note that EPEL appears > to have python 3 packages... > All I can say is that option was put on the table. > Ultimately, though, I'm not in a position to be making any definitive > statements about how to handle this. RDO has more consumers than just > TripleO. The purpose of my email was mostly to provide some historical > perspective from back when we were doing TripleO CI on Fedora, why we're not > doing that anymore, and in fact went so far as to explicitly disable Fedora > in the undercloud installer. If Fedora is still our best option then so be > it, but I don't want anyone to think it's going to be as simple as > s/CentOS/Fedora/ (I assume no one does, but you know what they say about > ass-u-me :-). > I agree it won't be simple, we will have to provide those repositories, determine how we will gate updates, fix puppet modules, POI, etc.. and that's only a beginning. That's why we won't be providing raw Fedora but rather a curated version and at some point, we'll likely freeze it. That's kinda similar to how EL8 is made, but it won't be EL8. :o) Let's say that the time is ticking, if we want to ship a productized OpenStack distro on Python3, and possibly on EL8 (Hint: I don't know when it will be released, and moreover, I'm not the one who gets to decide when RHOSP will support EL8), we're about to reach the point of no-return. H. > >> >> Making stuff work on Fedora is not going to be easy for anyone but it >> sure beats messing with 1500+ packages that we'd need to untangle >> later. >> Most of the hard work for Fedora is already done as far as packaging >> is concerned, we never really stopped building packages for Fedora >> [4]. >> >> It means we should be prepared once RHEL 8 comes out. >> >> [1]: https://www.softwarecollections.org/en/ >> [2]: >> https://softwarefactory-project.io/r/gitweb?p=scl/zuul-distgit.git;a=blob;f=zuul.spec;h=6bba6a79c1f8ff844a9ea3715ab2cef1b12d323f;hb=refs/heads/master >> [3]: >> https://www.softwarecollections.org/en/docs/guide/#chap-Packaging_Software_Collections >> [4]: https://trunk.rdoproject.org/fedora-rawhide/report.html >> >> David Moreau Simard >> Senior Software Engineer | OpenStack RDO >> >> dmsimard = [irc, github, twitter] >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From tpb at dyncloud.net Wed Feb 14 21:53:22 2018 From: tpb at dyncloud.net (Tom Barron) Date: Wed, 14 Feb 2018 16:53:22 -0500 Subject: [openstack-dev] [tripleo][python3] python3 readiness? 
In-Reply-To: <06558ec3-3452-0e7d-3f6d-c0897ceff2a7@nemebean.com> References: <20180213195745.pzucdooks24nbaqr@barron.net> <06558ec3-3452-0e7d-3f6d-c0897ceff2a7@nemebean.com> Message-ID: <20180214215322.faxmj4pkltk2nhaq@barron.net> On 13/02/18 16:53 -0600, Ben Nemec wrote: > > >On 02/13/2018 01:57 PM, Tom Barron wrote: >>Since python 2.7 will not be maintained past 2020 [1] it is a >>reasonable conjecture that downstream distributions >>will drop support for python 2 between now and then, perhaps as >>early as next year. > >I'm not sure I agree. I suspect python 2 support will not go quietly >into that good night. Personally I anticipate a lot of kicking and >screaming right up to the end, especially from change averse >enterprise users. > >But that's neither here nor there. I think we're all in agreement >that python 3 support is needed. :-) Yeah, but you raise a good issue. How likely is it that EL8 will choose -- perhaps under duress -- to support both python 2 and python 3 in the next big downstream release. If this is done long enough that we can support TripleO deployments on CentOS 8 using python2 while at the same time testing TripleO deployments on CentOS using python3 then TripleO support for Fedora wouldn't be necessary. Perhaps this question is settled, perhaps it is open. Let's try to nail down which for the record. > >>In Pike, OpenStack projects, including TripleO, added python 3 unit >>tests.  That effort was a good start, but likely we can agree that >>it is *only* a start to gaining confidence that real life TripleO >>deployments will "just work" running python 3.  As agreed in the >>TripleO community meeting, this email is intended to kick off a >>discussion in advance of PTG on what else needs to be done. >> >>In this regard it is worth observing that TripleO currently only >>supports CentOS deployments and CentOS won't have python 3 support >>until RHEL does, which may be too late to test deploying with >>python3 before support for python2 is dropped.  Fedora does have >>support for python 3 and for this reason RDO has decided [2] to >>begin work to run with *stabilized* Fedora repositories in the Rocky >>cycle, aiming to be ready on time to migrate to Python 3 and support >>its use in downstream and upstream CI pipelines. > >So that means we'll never have Python 3 on CentOS 7 and we need to >start supporting Fedora again in order to do functional testing on >py3? That's potentially messy. My recollection of running TripleO CI >on Fedora is that it was, to put it nicely, a maintenance headache. >Even with the "stabilized" repos from RDO, TripleO has a knack for >hitting edge case bugs in a fast-moving distro like Fedora. I guess >it's not entirely clear to me what the exact plan is since there's >some discussion of frozen snapshots and such, which might address the >fast-moving part. > >It also means more CI jobs, unless we're okay with dropping CentOS >support for some scenarios and switching them to Fedora. Given the >amount of changes between CentOS 7 and current Fedora that's a pretty >big gap in our testing. > >I guess if RDO has chosen this path then we don't have much choice. >As far as next steps, the first thing that would need to be done is to >get TripleO running on Fedora again. I suggest starting with https://github.com/openstack/instack-undercloud/blob/3e702f3bdfea21c69dc8184e690f26e142a13bff/instack_undercloud/undercloud.py#L1377 >:-) > >-Ben -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From hguemar at fedoraproject.org Wed Feb 14 22:11:57 2018 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Wed, 14 Feb 2018 23:11:57 +0100 Subject: [openstack-dev] [tripleo][python3] python3 readiness? In-Reply-To: <20180214215322.faxmj4pkltk2nhaq@barron.net> References: <20180213195745.pzucdooks24nbaqr@barron.net> <06558ec3-3452-0e7d-3f6d-c0897ceff2a7@nemebean.com> <20180214215322.faxmj4pkltk2nhaq@barron.net> Message-ID: 2018-02-14 22:53 GMT+01:00 Tom Barron : > On 13/02/18 16:53 -0600, Ben Nemec wrote: >> >> >> >> On 02/13/2018 01:57 PM, Tom Barron wrote: >>> >>> Since python 2.7 will not be maintained past 2020 [1] it is a reasonable >>> conjecture that downstream distributions >>> will drop support for python 2 between now and then, perhaps as early as >>> next year. >> >> >> I'm not sure I agree. I suspect python 2 support will not go quietly into >> that good night. Personally I anticipate a lot of kicking and screaming >> right up to the end, especially from change averse enterprise users. >> >> But that's neither here nor there. I think we're all in agreement that >> python 3 support is needed. :-) > > > Yeah, but you raise a good issue. How likely is it that EL8 will choose -- > perhaps under duress -- to support both python 2 and python 3 in the next > big downstream release. If this is done long enough that we can support > TripleO deployments on CentOS 8 using python2 while at the same time testing > TripleO deployments on CentOS using python3 then TripleO support for Fedora > wouldn't be necessary. > > Perhaps this question is settled, perhaps it is open. Let's try to nail > down which for the record. > All I can say is that question is definitely settled. As far as OpenStack is concerned, there will be no Python2 on EL8. > >> >>> In Pike, OpenStack projects, including TripleO, added python 3 unit >>> tests. That effort was a good start, but likely we can agree that it is >>> *only* a start to gaining confidence that real life TripleO deployments will >>> "just work" running python 3. As agreed in the TripleO community meeting, >>> this email is intended to kick off a discussion in advance of PTG on what >>> else needs to be done. >>> >>> In this regard it is worth observing that TripleO currently only supports >>> CentOS deployments and CentOS won't have python 3 support until RHEL does, >>> which may be too late to test deploying with python3 before support for >>> python2 is dropped. Fedora does have support for python 3 and for this >>> reason RDO has decided [2] to begin work to run with *stabilized* Fedora >>> repositories in the Rocky cycle, aiming to be ready on time to migrate to >>> Python 3 and support its use in downstream and upstream CI pipelines. >> >> >> So that means we'll never have Python 3 on CentOS 7 and we need to start >> supporting Fedora again in order to do functional testing on py3? That's >> potentially messy. My recollection of running TripleO CI on Fedora is that >> it was, to put it nicely, a maintenance headache. Even with the >> "stabilized" repos from RDO, TripleO has a knack for hitting edge case bugs >> in a fast-moving distro like Fedora. I guess it's not entirely clear to me >> what the exact plan is since there's some discussion of frozen snapshots and >> such, which might address the fast-moving part. 
>> >> It also means more CI jobs, unless we're okay with dropping CentOS support >> for some scenarios and switching them to Fedora. Given the amount of >> changes between CentOS 7 and current Fedora that's a pretty big gap in our >> testing. >> >> I guess if RDO has chosen this path then we don't have much choice. As >> far as next steps, the first thing that would need to be done is to get >> TripleO running on Fedora again. I suggest starting with >> https://github.com/openstack/instack-undercloud/blob/3e702f3bdfea21c69dc8184e690f26e142a13bff/instack_undercloud/undercloud.py#L1377 >> :-) >> >> -Ben > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From openstack at nemebean.com Wed Feb 14 22:14:14 2018 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 14 Feb 2018 16:14:14 -0600 Subject: [openstack-dev] [tripleo][python3] python3 readiness? In-Reply-To: References: <20180213195745.pzucdooks24nbaqr@barron.net> <06558ec3-3452-0e7d-3f6d-c0897ceff2a7@nemebean.com> <0074a615-9440-b1c4-7226-8f3dc9c729d4@nemebean.com> Message-ID: On 02/14/2018 03:26 PM, Haïkel wrote: > I agree it won't be simple, we will have to provide those > repositories, determine how > we will gate updates, fix puppet modules, POI, etc.. and that's only a > beginning. > > That's why we won't be providing raw Fedora but rather a curated > version and at some > point, we'll likely freeze it. That's kinda similar to how EL8 is > made, but it won't be EL8. :o) Yeesh, I hadn't looked at it that way, but we would basically be doing the EL8 process in parallel with the actual EL8. I assume the reason we can't use the actual EL8 pre-release whenever it becomes a thing is that there isn't a corresponding CentOS pre-release that would be usable upstream? > > Let's say that the time is ticking, if we want to ship a productized > OpenStack distro on > Python3, and possibly on EL8 (Hint: I don't know when it will be > released, and moreover, > I'm not the one who gets to decide when RHOSP will support EL8), we're > about to reach > the point of no-return. [snarky comment redacted in the interest of keeping my job :-)] From openstack at nemebean.com Wed Feb 14 22:16:19 2018 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 14 Feb 2018 16:16:19 -0600 Subject: [openstack-dev] [TripleO][ui] Network Configuration wizard In-Reply-To: References: Message-ID: On 02/09/2018 08:49 AM, Jiri Tomasek wrote: > *Step 2. network-environment -> NIC configs* > > Second step of network configuration is NIC config. For this > network-environment.yaml is used which references NIC config templates > which define network_config in their resources section. User is > currently required to configure these templates manually. We would like > to provide interactive view which would allow user to setup these > templates using TripleO UI. A good example is a standalone tool created > by Ben Nemec [3]. > > There is currently work aimed for Pike to introduce jinja templating for > network environments and templates [4] (single-nic-with-vlans, > bond-with-vlans) to support composable networks and roles (integrate > data from roles_data.yaml and network_data.yaml) It would be great if we > could move this one step further by using these samples as a starting > point and let user specify full NIC configuration. 
> > Available information at this point: > - list of roles and networks as well as which networks need to be > configured at which role's NIC Config template > - os-net-config schema which defines NIC configuration elements and > relationships [5] > - jinja templated sample NIC templates > > Requirements: > - provide feedback to the user about networks assigned to role and have > not been configured in NIC config yet I don't have much to add on this point, but I will note that because my UI is standalone and pre-dates composable networks it takes the opposite approach. As a user adds a network to a role, it exposes the configuration for that network. Since you have the networks ahead of time, you can obviously expose all of those settings up front and ensure the correct networks are configured for each nic-config. I say this mostly for everyone's awareness so design elements of my tool don't get copied where they don't make sense. > - let user construct network_config section of NIC config templates for > each role (brigdes/bonds/vlans/interfaces...) > - provide means to assign network to vlans/interfaces and automatically > construct network_config section parameter references So obviously your UI code is going to differ, but I will point out that the code in my tool for generating the actual os-net-config data is semi-standalone: https://github.com/cybertron/tripleo-scripts/blob/master/net_processing.py It's also about 600 lines of code and doesn't even handle custom roles or networks yet. I'm not clear whether it ever will at this point given the change in my focus. Unfortunately the input JSON schema isn't formally documented, although the unit tests do include a number of examples. https://github.com/cybertron/tripleo-scripts/blob/master/test-data/all-the-things/nic-input.json covers quite a few different cases. > - populate parameter definitions in NIC config templates based on > role/networks assignment > - populate parameter definitions in NIC config templates based on > specific elements which use them e.g. BondInterfaceOvsOptions in case > when ovs_bond is used I guess there's two ways to handle this - you could use the new jinja templating to generate parameters, or you could handle it in the generation code. I'm not sure if there's a chicken-and-egg problem with the UI generating jinja templates, but that's probably the simplest option if it works. The approach I took with my tool was to just throw all the parameters into all the files and if they're unused then oh well. With jinja templating you could do the same thing - just copy a single boilerplate parameter header that includes the jinja from the example nic-configs and let the templating handle all the logic for you. It would be cleaner to generate static templates that don't need to be templated, but it would require re-implementing all of the custom network logic for the UI. I'm not sure being cleaner is sufficient justification for doing that. > - store NIC config templates in deployment plan and reference them from > network-environment.yaml > > Problems to solve: > As a biggest problem to solve I see defining logic which would > automatically handle assigning parameters to elements in network_config > based on Network which user assigns to the element. For example: Using > GUI, user is creating network_config for compute role based on > network/config/multiple-nics/compute.yaml, user adds an interface and > assigns the interface to Tenant network. 
Resulting template should then > automatically populate addresses/ip_netmask: get_param: TenantIpSubnet. > Question is whether all this logic should live in GUI or should GUI pass > simplified format to Mistral workflow which will convert it to proper > network_config format and populates the template with it. I guess the fact that I separated the UI and config generation code in my tool is my answer to this question. I don't remember all of my reasons for that design, but I think the main thing was to keep the input and generation cleanly separated. Otherwise there was a danger of making a UI change and having it break the generation process because they were tightly coupled. Having a JSON interface between the two avoids a lot of those problems. It also made it fairly easy to unit test the generation code, whereas trying to mock out all of the UI elements would have been a fragile nightmare. It does require a bunch of translation code[1], but a lot of it is fairly boilerplate (just map UI inputs to JSON keys). 1: https://github.com/cybertron/tripleo-scripts/blob/171aedabfead1f27f4dc0fce41a8b82da28923ed/net-iso-gen.py#L515 Hope this helps. -Ben From ekcs.openstack at gmail.com Wed Feb 14 22:24:37 2018 From: ekcs.openstack at gmail.com (Eric K) Date: Wed, 14 Feb 2018 14:24:37 -0800 Subject: [openstack-dev] [monasca][congress] help configuring monasca for gate In-Reply-To: References: Message-ID: Thank Witek =) We're looking at getting pushed alarm data from Monasca via webhook. Fabiog started that quite a while back and we're hoping to revive it. Not sure there is any feature requests. But I do want to understand the authentication situation in Monasca webhook. Wondering whether Congress should require keystone auth in the webhook request or expect unauthenticated requests. On a much more ambitious and speculative front, we're also thinking about how Congress may be able to leverage Monasca to evaluate certain policies. It's also something we explored with fabiog before. For example, if there is a rule that identifies low usage servers: underutilized_servers(server_id) :- ceilometer:statistics(meter_name='cpu_util',resource_id=server_id, avg=avg), builtin:lt(avg, 10) There may be a way for Congress to (semi) automatically create a corresponding Monasca alarm and rewrite the rule to depend on the alarm. I'd also love to hear if there are any other thoughts for how one project may benefit from the other. Eric Kao (ekcs) On 2/13/18, 6:45 AM, "Bedyk, Witold" wrote: >Hi Eric, > >glad to hear the problems are solved :) > >What are your plans around integration with Monasca? Please let us know >if you have related feature requests. > >Cheers >Witek > > >> -----Original Message----- >> From: Eric K [mailto:ekcs.openstack at gmail.com] >> Sent: Dienstag, 13. Februar 2018 03:59 >> To: OpenStack Development Mailing List (not for usage questions) >> >> Subject: Re: [openstack-dev] [monasca][congress] help configuring >>monasca >> for gate >> >> Oops. Nevermind. Looks like it's working now. >> >> On 2/12/18, 5:00 PM, "Eric K" wrote: >> >> >Hi Monasca folks, >> >I'm trying to configure monasca in congress gate [1] and modeled it >> >after this monasca playbook [2]. 
But I get: >> >rsync: change_dir "/home/zuul/src/*/openstack/monasca-common" failed: >> >No such file or directory (2) >> > >> >http://logs.openstack.org/22/530522/1/check/congress-devstack-api-mysql >> >/16 >> >6 >> >d935/logs/devstack-gate-setup-workspace-new.txt.gz#_2017-12- >> 30_01_53_41 >> >_60 >> >7 >> > >> > >> >Any hints on what I need to do differently? Thanks! >> > >> >[1] https://review.openstack.org/#/c/530522/ >> >[2] >> >https://github.com/openstack/monasca- >> api/blob/master/playbooks/legacy/m >> >ona >> >s >> >ca-tempest-base/run.yaml >> > >> > >> >> >> >> __________________________________________________________ >> ________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev- >> request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >__________________________________________________________________________ >OpenStack Development Mailing List (not for usage questions) >Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mriedemos at gmail.com Thu Feb 15 00:01:41 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 14 Feb 2018 18:01:41 -0600 Subject: [openstack-dev] [nova] Queens blueprint burndown chart Message-ID: I sent a similar email after Pike was released [1] and these are our blueprint burndown chart results for Queens [2]. Comparing to Pike, the trends are similar, with the overall numbers down. Things ramp up until the spec freeze, then tail off, with a little spike toward the end to get things in by feature freeze. Comparing final numbers to Pike ------------------------------- Max targeted / approved for Pike: 76 / 69 Max targeted / approved for Queens: 65 / 53 Final completed for Pike: 50 Final completed for Queens: 42 -- Within targeted/approved/completed, the relative numbers are about the same and our percentage of approved / completed is actually better in Queens (72% completion percentage of approved blueprints in Pike compared to 79% completion percentage of approved blueprints in Queens). So while we targeted fewer blueprints in Queens, we did a better job of actually completing them. To me, this indicates some level of stability with the things we've been working on over the last several releases, particularly with respect to cells v2 and placement. It likely also means there are just fewer people / organizations trying to contribute, which isn't a surprise to me with the maturity of OpenStack. This can be both a good thing a bad thing, but I'm not going to try and get into that here. We can talk about this and more during the PTG when we go through our Queens retrospective [3]. 
[1] http://lists.openstack.org/pipermail/openstack-dev/2017-September/121875.html [2] https://docs.google.com/spreadsheets/d/e/2PACX-1vRh5glbJ44-Ru2iARidNRa7uFfn2yjiRPjHIEQOc3Fjp5YDAlcMmXkYAEFW0WNhALl010T4rzyChuO9/pubhtml?gid=128173249&single=true [3] https://etherpad.openstack.org/p/nova-queens-retrospective -- Thanks, Matt From gmann at ghanshyammann.com Thu Feb 15 01:01:50 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 15 Feb 2018 10:01:50 +0900 Subject: [openstack-dev] [nova][neutron][infra] zuul job definitions overrides and the irrelevant-file attribute In-Reply-To: References: <1516975504.9811.5@smtp.office365.com> <874ln8v6ye.fsf@meyer.lemoncheese.net> <87inazblqe.fsf@meyer.lemoncheese.net> Message-ID: On Thu, Feb 15, 2018 at 1:43 AM, Andrea Frittoli wrote: > > > On Wed, Feb 14, 2018 at 4:05 PM James E. Blair wrote: >> >> Andrea Frittoli writes: >> >> >> That has no irrelevant-files matches, and so matches everything. If >> >> you >> >> drop the use of that template, it will work as expected. Or, if you >> >> can >> >> say with some certainty that nova's irrelevant-files set is not >> >> over-broad, you could move the irrelevant-files from nova's invocation >> >> into the template, or even the job, and drop nova's individual >> >> invocation. >> >> >> > I don't think projects in the integrated gate should remove themselves >> > from the >> > template, it really helps keeping consistency. >> > >> > The pattern I've seen is that most projects repeat the same list of >> > irrelevant files >> > over and over again in all of their integration tests, It would be handy >> > in >> > future to >> > be able to set irrelevant-files on a template when it's consumed. >> > So we could have shared irrelevant files defined in the template, and >> > custom ones >> > added by each project when consuming the template. I don't this is is >> > possible today. >> > Does it sound like a reasonable feature request? >> >> A template may specify many jobs, so if we added something like that >> feature, what would the project-pipeline template application's >> irrelevant files apply to? All of the jobs in the template? We could >> do that. > > > That's what I was thinking about, > >> >> But it only takes one exception for this approach to fall >> short, and while a lot of irrelevant-files stanzas for a project are >> similar, I don't think having exceptions will be unusual. The only way >> to handle exceptions like that is to specify them with jobs, and we're >> back in the same situation. >> >> Also, combining irrelevant-files is very difficult to think about. >> Because it's two inverse matches, combining them ends up being the >> intersection, not the union. So if we implemented this, I don't think >> we should have any irrelevant-files in the template, only on template >> application. >> >> I still tend to think that irrelevant-files are almost invariably >> project-specific, so we should avoid using them in templates and job >> definitions (unless absolutely certain they are universally applicable), >> and we should only define them in one place -- in the project-pipeline >> definition for individual jobs. > > > I agree with your concerns, but the problem is that the current > implementation > renders job templates rather useless. Each project has to re-add each job in > a > template in its pipeline content definition to be able to apply irrelevant > files, and > that will turn stale if a template is modified. 
> > With the migration to zuulv3 native jobs there is a lot of job renaming and > adding/ > removing jobs going on, so for instance if a job is removed what used to be > a setting > irrelevant files may become running an extra job. > > I don't really have a solution for this, but perhaps someone has an idea? I am in favor of idea of not defining the irrelevant_files in base job definition or template and have them defined by project in project-pipeline only. This solve most of the issue even that can ask each project to define the common irrelevant_files. With that, should we keep the Template limited to system one only which are mentioned here [1] ? i mean no 'integrate-gate' etc template. ..1 https://docs.openstack.org/infra/manual/zuulv3.html#what-to-convert -gmann > > Andrea Frittoli (andreaf) > >> >> -Jim > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From kennelson11 at gmail.com Thu Feb 15 01:06:44 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 15 Feb 2018 01:06:44 +0000 Subject: [openstack-dev] [Election] PTL Election Results & Conclusion Message-ID: Hello Everyone! Thank you to the electorate, to all those who voted and to all candidates who put their name forward for Project Team Lead (PTL) in this election. A healthy, open process breeds trust in our decision making capability thank you to all those who make this process possible. Now for the results of the PTL election process, please join me in extending congratulations to the following PTLs: * Barbican : Ade Lee * Blazar : Masahito Muroi * Chef OpenStack : Samuel Cassiba * Cinder : Jay Bryant * Cloudkitty : Christophe Sauthier * Congress : Eric Kao * Cyborg : Zhipeng Huang * Designate : Graham Hayes * Documentation : Petr Kovar * Dragonflow : Omer Anson * Ec2 Api : Andrey Pavlov * Freezer : Saad Zaher * Glance : Erno Kuvaja * Heat : Rico Lin * Horizon : Ivan Kolodyazhny * I18n : Frank Kloeker * Infrastructure : Clark Boylan * Ironic : Julia Kreger * Karbor : Ying Chen * Keystone : Lance Bragstad * Kolla : Jeffery Zhang * Kuryr : Daniel Mellado * Loci : Sam Yaple * Magnum : Spyros Trigazis * Manila : Tom Barron * Masakari : Sampath Priyankara * Mistral : Dougal Matthews * Monasca : Witold Bedyk * Murano : Rong Zhu * Neutron : Miguel Lavelle * Nova : Melanie Witt * Octavia : Michael Johnson * OpenStackAnsible : Jean-Philippe Evrard * OpenStackClient : Dean Troyer * OpenStackSDK : Monty Taylor * OpenStack Charms : James Page * OpenStack Helm : Matt McEuen * Oslo : Ben Nemec * Packaging Rpm : Javier Peña * Puppet OpenStack : Mohammed Naser * Quality Assurance : Ghanshyam Mann * Rally : Andrey Kurilin * RefStack : Chris Hoge * Release Management : Sean McGinnis * Requirements : Matthew Thode * Sahara : Telles Mota Vidal Nobrega * Searchlight : Steve McLellan * Senlin : XueFeng Liu * Solum : Rong Zhu * Storlets : Kota Tsuyuzaki * Swift : John Dickinson * Tacker : Yong Sheng Gong * Telemetry : Julien Danjou * Tricircle : Zhiyuan Cai * Tripleo : Alex Schultz * Trove : Zhao Chao * Vitrage : Ifat Afek * Watcher : Alexander Chadin * Winstackers : Claudiu Belu * Zaqar : Wang Hao * Zun : Feng Shengqin Elections: Kolla: https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_eb44669f6742dd4b Mistral: https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_74983fd83cf5adab 
Quality_Assurance: https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_274f37d8e5497358 Election process details and results are also available here: https://governance.openstack.org/election/ Thank you to all involved in the PTL election process, -Kendall Nelson (diablo_rojo) -------------- next part -------------- An HTML attachment was scrubbed... URL: From luo.lujin at jp.fujitsu.com Thu Feb 15 02:19:48 2018 From: luo.lujin at jp.fujitsu.com (Luo, Lujin) Date: Thu, 15 Feb 2018 02:19:48 +0000 Subject: [openstack-dev] [ptg] etherpad for Fast Forward Upgrading? Message-ID: Hello everyone, Can someone be nice enough to point me to the Rocky Fast Forward Upgrading etherpad page? I am seeing Fast Forward Upgrading scheduled on Monday [1], but the etherpad for it is not listed in [2]. Thanks in advance. [1] https://www.openstack.org/ptg/#tab_schedule [2] https://wiki.openstack.org/wiki/PTG/Rocky/Etherpads Best regards, Lujin From melwittt at gmail.com Thu Feb 15 02:37:04 2018 From: melwittt at gmail.com (melanie witt) Date: Wed, 14 Feb 2018 18:37:04 -0800 Subject: [openstack-dev] [nova] Rocky PTG early planning In-Reply-To: References: Message-ID: <4A2CD313-F89B-4038-80E1-12CCFB088896@gmail.com> > On Jan 8, 2018, at 10:33, Matt Riedemann wrote: > > As the Queens release winds to a close, I've started thinking about topics for Rocky that can be discussed at the PTG. > > I've created an etherpad [1] for just throwing various topics in there, completely free-form at this point; just remember to add your name next to any topic you add. > > [1] https://etherpad.openstack.org/p/nova-ptg-rocky We have the PTG coming up soon in just 12 days and I wanted to remind everybody to please add your discussion topics to the etherpad ^. We’ll be using the etherpad as our agenda for Wed-Fri. If you’d like your topic discussed but won’t be able to attend the PTG in person, please make a note about it next to your topic and name when you add it. And provide enough context/detail so we can discuss your topic and add notes/action items/next steps to the etherpad for your review. Then, we can follow up asynchronously on the mailing list and/or IRC after the PTG. Best, -melanie From masayuki.igawa at gmail.com Thu Feb 15 06:29:34 2018 From: masayuki.igawa at gmail.com (Masayuki Igawa) Date: Thu, 15 Feb 2018 15:29:34 +0900 Subject: [openstack-dev] [all][infra] PTG Infra Helproom Info and Signup In-Reply-To: References: <1518561402.929387.1269899112.764A7EF6@webmail.messagingengine.com> Message-ID: <20180215062934.722b3bwswhsefkou@fastmail.com> On 02/14, Andrea Frittoli wrote: > On Wed, Feb 14, 2018 at 10:42 AM Thierry Carrez > wrote: > > > Clark Boylan wrote: > > > Last PTG the infra helproom seemed to work out for projects that knew > > about it. The biggest problem seemed to be that other projects either just > > weren't aware that there is/was an Infra helproom or didn't know when an > > appropriate time to show up would be. We are going to try a couple things > > this time around to try and address those issues. > > > > > > First of all the Infra team is hosting a helproom at the Dublin PTG. Now > > you should all know :) The idea is that if projects or individuals have > > questions for the infra team or problems that we can help you with there is > > time set aside specifically for this. I'm not sure what room we will be in, > > you will have to look at the map, but we have the entirety of Monday and > > Tuesday set aside for this. 
> > > > Also worth noting that it is a "project infrastructure" helproom, in the > > largest sense. It goes beyond the "Infra" team: you can bring any > > question around project support from horizontal support teams like QA, > > > > Indeed, thanks for pointing that out. > > A lot of us from the QA team will be in Dublin, available during the help > ours for questions or topics you may want to discuss. > There's usually enough time to sit down and hack a few things on the > spot... and there are enough infra/qa cores around to get things reviewed > and merged during the week. > > On the QA side we don't have an ethercalc (yet?) but if there are topics > you would like to discuss / develop please add something to the etherpad. > > Andrea Frittoli (andreaf) > > [1] https://etherpad.openstack.org/p/qa-rocky-ptg Yeah, actually, I was thinking to have a dedicated session for stestr Q&A if someone wants it. Does anybody want to join the stestr Q&A session? Or, we can talk about it during the PTG anytime :) -- Masayuki > > > > release management, requirements, stable team... > > > > -- > > Thierry Carrez (ttx) > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From renat.akhmerov at gmail.com Thu Feb 15 06:36:51 2018 From: renat.akhmerov at gmail.com (Renat Akhmerov) Date: Thu, 15 Feb 2018 13:36:51 +0700 Subject: [openstack-dev] [mistral][release][ffe] Requesting FFE for supporting source execution id in the Mistral client In-Reply-To: <20180214155503.nogxlhigghgs3ijp@gentoo.org> References: <20180214155503.nogxlhigghgs3ijp@gentoo.org> Message-ID: <5701bcc4-a3f5-43e1-94fd-588768c996e8@Spark> Thanks! Renat Akhmerov @Nokia On 14 Feb 2018, 22:55 +0700, Matthew Thode , wrote: > On 18-02-14 17:03:10, Renat Akhmerov wrote: > > Hi, > > > > We were asked to do a FFE request to be able to release a new version of Mistral client out of stable/queens branch. > > > > The backport patch: https://review.openstack.org/#/c/543393/ > > The release patch: https://review.openstack.org/#/c/543402 > > > > The reason to do that after the feature freeze is that we didn’t backport (and release) this patch by mistake (simply missed it) whereas the corresponding functionality was already included on the server side and went to Queens-3 and subsequent releases. > > > > From my side I can assure that the change is backwards compatible and very much wanted in stable/queens by many users. > > > > Hence we’re kindly asking to approve the release patch. > > > > FFE approved from the requirements side. 
> > -- > Matthew Thode (prometheanfire) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Thu Feb 15 08:28:09 2018 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 15 Feb 2018 09:28:09 +0100 Subject: [openstack-dev] [ptg] etherpad for Fast Forward Upgrading? In-Reply-To: References: Message-ID: Luo, Lujin wrote: > Can someone be nice enough to point me to the Rocky Fast Forward Upgrading etherpad page? > > I am seeing Fast Forward Upgrading scheduled on Monday [1], but the etherpad for it is not listed in [2]. Indeed, the etherpad is missing, and I realize we don't have anyone signed up yet to clearly lead that track... Is anyone interested in leading that track ? -- Thierry Carrez (ttx) From zigo at debian.org Thu Feb 15 08:31:19 2018 From: zigo at debian.org (Thomas Goirand) Date: Thu, 15 Feb 2018 09:31:19 +0100 Subject: [openstack-dev] Debian OpenStack packages switching to Py3 for Queens Message-ID: <2916933d-c5be-9301-f8de-e0d380627c54@debian.org> Hi, Since I'm getting some pressure from other DDs to actively remove Py2 support from my packages, I'm very much considering switching all of the Debian packages for Queens to using exclusively Py3. I would have like to read some opinions about this. Is it a good time for such move? I hope it is, because I'd like to maintain as few Python package with Py2 support at the time of Debian Buster freeze. Also, doing Queens, I've noticed that os-xenapi is still full of py2 only stuff in os_xenapi/dom0. Can we get those fixes? Here's my patch: https://review.openstack.org/544809 Cheers, Thomas Goirand (zigo) From lyarwood at redhat.com Thu Feb 15 09:02:55 2018 From: lyarwood at redhat.com (Lee Yarwood) Date: Thu, 15 Feb 2018 09:02:55 +0000 Subject: [openstack-dev] [ptg] etherpad for Fast Forward Upgrading? In-Reply-To: References: Message-ID: <20180215090255.fi2hspruevbuga3w@lyarwood.usersys.redhat.com> On 15-02-18 02:19:48, Luo, Lujin wrote: > Hello everyone, > > Can someone be nice enough to point me to the Rocky Fast Forward Upgrading etherpad page? > > I am seeing Fast Forward Upgrading scheduled on Monday [1], but the etherpad for it is not listed in [2]. > > Thanks in advance. > > [1] https://www.openstack.org/ptg/#tab_schedule > [2] https://wiki.openstack.org/wiki/PTG/Rocky/Etherpads Hello Lujin, My apologies, I created this a while ago and forgot to add it to the list and ask for input on the ML: https://etherpad.openstack.org/p/ffu-ptg-rocky I'll get this added to the list now and will send a separate note to the ML later today seeking additional input on the agenda. Cheers, -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: not available URL: From lyarwood at redhat.com Thu Feb 15 09:04:44 2018 From: lyarwood at redhat.com (Lee Yarwood) Date: Thu, 15 Feb 2018 09:04:44 +0000 Subject: [openstack-dev] [ptg] etherpad for Fast Forward Upgrading? 
In-Reply-To: References: Message-ID: <20180215090444.c272xpbgtpa7ccyt@lyarwood.usersys.redhat.com> On 15-02-18 09:28:09, Thierry Carrez wrote: > Luo, Lujin wrote: > > Can someone be nice enough to point me to the Rocky Fast Forward Upgrading etherpad page? > > > > I am seeing Fast Forward Upgrading scheduled on Monday [1], but the etherpad for it is not listed in [2]. > > Indeed, the etherpad is missing, and I realize we don't have anyone > signed up yet to clearly lead that track... > > Is anyone interested in leading that track ? I did sign up a while ago, I've just failed to follow up during the last few weeks. I'll try to get things moving today. Regards, -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 From thierry at openstack.org Thu Feb 15 09:23:38 2018 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 15 Feb 2018 10:23:38 +0100 Subject: [openstack-dev] [ptg] ptgbot HOWTO for track leads Message-ID: <33e7cdca-a590-a6fb-8029-c3cf5204296a@openstack.org> Hi everyone, In two weeks some of us will congregate in Dublin for the 3rd OpenStack PTG. The event is made of several 'tracks' (organized around a specific team or a specific theme). Topics of discussions are loosely scheduled in those tracks, based on the needs of the attendance. This allows to maximize attendee productivity, but the downside is that it can make the event a bit confusing to navigate. To mitigate that issue, we are using an IRC bot to expose what's happening currently at the event at the following page: http://ptg.openstack.org/ptg.html As a track lead, it is imperative that you make use of the PTG bot to communicate what's happening. This is done by joining the #openstack-ptg IRC channel on Freenode and speaking commands to the bot. To indicate what's currently being discussed, you will use the track name hashtag (found in the "Scheduled tracks" section on the above page), with the 'now' command: #TRACK now Example: #swift now brainstorming improvements to the ring You can also mention other track names to make sure to get people attention when the topic is cross-project: #nova now discussing #cinder interactions There can only be one 'now' entry for a given track at a time. To indicate what will be discussed next, you can enter one or more 'next' commands: #TRACK next Example: #api-sig next at 2pm we'll be discussing pagination woes Note that in order to keep content current, entering a new 'now' command for a track will erase any 'next' entry for that track. Finally, if you want to clear all 'now' and 'next' entries for your track, you can issue the 'clean' command: #TRACK clean Example: #ironic clean For more information on the bot commands, please see: https://git.openstack.org/cgit/openstack/ptgbot/tree/README.rst Pro tip: designate someone tasked with updating the PTGbot with what's currently being discussed, so that you can focus on keeping the discussion on track. You can play with the bot in the coming week, data will be reset on the Sunday before the event starts. -- Thierry Carrez (ttx) From thierry at openstack.org Thu Feb 15 09:24:53 2018 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 15 Feb 2018 10:24:53 +0100 Subject: [openstack-dev] [ptg] Booking reservable rooms with the ptgbot Message-ID: <7f903c73-28b3-31e4-10e1-54b11f39ed61@openstack.org> Hi everyone, At every PTG we have additional reservable space for extra un-scheduled discussions, or smaller teams to take advantage of. In past PTGs we've been using an ethercalc document to book that space. 
The issue with that approach was that we were using two separate systems and it was not possible to use the PTG bot to update everyone on what was currently discussed in those reserved rooms. In Dublin we'll be using the PTG bot to book reservable space. The PTG bot page now shows which track is allocated to which room, as well as available space: http://ptg.openstack.org/ptg.html Available slots display a slot code (room name - time slot) that you can use to issue a 'book' command to the PTG bot on #openstack-ptg: #TRACK book Example: #relmgt book Coiste Bainisti-MonP2 Any track can book additional space and time using this system. All slots are 1h45-long. If your topic of discussion does not fall into an existing track, please ask PTG bot admins (ttx, diablo_rojo, infra...) to create a track for you (which they can do by getting op rights and issuing a ~add command). In Dublin some of the teams do not have any pre-scheduled space, and will solely be relying on this feature to book the time that makes the most sense for them. Those teams are Shade/OpenStackSDK (#sdk), OpenStackClient (#osc), Stable branch maintenance (#stable), Requirements (#requirements), Winstackers (#winstackers), Puppet OpenStack (#puppet), Dragonflow (#dragonflow), Release Management (#relmgt), and Rally (#rally). For more information on the bot commands, please see: https://git.openstack.org/cgit/openstack/ptgbot/tree/README.rst You can play with the bot in the coming week, data will be reset on the Sunday before the event starts. Any booking made will be removed then. -- Thierry Carrez (ttx) From thierry at openstack.org Thu Feb 15 09:35:38 2018 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 15 Feb 2018 10:35:38 +0100 Subject: [openstack-dev] [ptg] etherpad for Fast Forward Upgrading? In-Reply-To: <20180215090444.c272xpbgtpa7ccyt@lyarwood.usersys.redhat.com> References: <20180215090444.c272xpbgtpa7ccyt@lyarwood.usersys.redhat.com> Message-ID: Lee Yarwood wrote: > On 15-02-18 09:28:09, Thierry Carrez wrote: >> Luo, Lujin wrote: >>> Can someone be nice enough to point me to the Rocky Fast Forward Upgrading etherpad page? >>> >>> I am seeing Fast Forward Upgrading scheduled on Monday [1], but the etherpad for it is not listed in [2]. >> >> Indeed, the etherpad is missing, and I realize we don't have anyone >> signed up yet to clearly lead that track... >> >> Is anyone interested in leading that track ? > > I did sign up a while ago, I've just failed to follow up during the last > few weeks. I'll try to get things moving today. Oops, I knew there was someone, just failed to document it. Thanks Lee! (I badly need a vacation.) -- Thierry Carrez (ttx) From thierry at openstack.org Thu Feb 15 09:46:39 2018 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 15 Feb 2018 10:46:39 +0100 Subject: [openstack-dev] [Election] PTL Election Results & Conclusion In-Reply-To: References: Message-ID: <997afcdf-abb4-0599-3b96-dbca1f55534a@openstack.org> Kendall Nelson wrote: > Thank you to the electorate, to all those who voted and to all > candidates who put their name forward for Project Team Lead (PTL) in > this election. A healthy, open process breeds trust in our decision > making capability thank you to all those who make this process possible. > > Now for the results of the PTL election process, please join me in > extending congratulations to the following PTLs: [...] Congrats to all newly-elected PTLs, and thanks to the election officials for their service ! 
On the stats side, we renewed 17 of the 64 PTLs, so around 27%. Our usual renewal rate is more around 35%, but we did renew more at the last elections (40%) so this is likely why we didn't renew as much as usual this time. -- Thierry Carrez (ttx) From jtomasek at redhat.com Thu Feb 15 10:00:33 2018 From: jtomasek at redhat.com (Jiri Tomasek) Date: Thu, 15 Feb 2018 11:00:33 +0100 Subject: [openstack-dev] [TripleO][ui] Network Configuration wizard In-Reply-To: References: Message-ID: On Wed, Feb 14, 2018 at 11:16 PM, Ben Nemec wrote: > > > On 02/09/2018 08:49 AM, Jiri Tomasek wrote: > >> *Step 2. network-environment -> NIC configs* >> >> Second step of network configuration is NIC config. For this >> network-environment.yaml is used which references NIC config templates >> which define network_config in their resources section. User is currently >> required to configure these templates manually. We would like to provide >> interactive view which would allow user to setup these templates using >> TripleO UI. A good example is a standalone tool created by Ben Nemec [3]. >> >> There is currently work aimed for Pike to introduce jinja templating for >> network environments and templates [4] (single-nic-with-vlans, >> bond-with-vlans) to support composable networks and roles (integrate data >> from roles_data.yaml and network_data.yaml) It would be great if we could >> move this one step further by using these samples as a starting point and >> let user specify full NIC configuration. >> >> Available information at this point: >> - list of roles and networks as well as which networks need to be >> configured at which role's NIC Config template >> - os-net-config schema which defines NIC configuration elements and >> relationships [5] >> - jinja templated sample NIC templates >> >> Requirements: >> - provide feedback to the user about networks assigned to role and have >> not been configured in NIC config yet >> > > I don't have much to add on this point, but I will note that because my UI > is standalone and pre-dates composable networks it takes the opposite > approach. As a user adds a network to a role, it exposes the configuration > for that network. Since you have the networks ahead of time, you can > obviously expose all of those settings up front and ensure the correct > networks are configured for each nic-config. > > I say this mostly for everyone's awareness so design elements of my tool > don't get copied where they don't make sense. > > - let user construct network_config section of NIC config templates for >> each role (brigdes/bonds/vlans/interfaces...) >> - provide means to assign network to vlans/interfaces and automatically >> construct network_config section parameter references >> > > So obviously your UI code is going to differ, but I will point out that > the code in my tool for generating the actual os-net-config data is > semi-standalone: https://github.com/cybertron/t > ripleo-scripts/blob/master/net_processing.py > > It's also about 600 lines of code and doesn't even handle custom roles or > networks yet. I'm not clear whether it ever will at this point given the > change in my focus. > > Unfortunately the input JSON schema isn't formally documented, although > the unit tests do include a number of examples. > https://github.com/cybertron/tripleo-scripts/blob/master/tes > t-data/all-the-things/nic-input.json covers quite a few different cases. 
> > - populate parameter definitions in NIC config templates based on >> role/networks assignment >> - populate parameter definitions in NIC config templates based on >> specific elements which use them e.g. BondInterfaceOvsOptions in case when >> ovs_bond is used >> > > I guess there's two ways to handle this - you could use the new jinja > templating to generate parameters, or you could handle it in the generation > code. > > I'm not sure if there's a chicken-and-egg problem with the UI generating > jinja templates, but that's probably the simplest option if it works. The > approach I took with my tool was to just throw all the parameters into all > the files and if they're unused then oh well. With jinja templating you > could do the same thing - just copy a single boilerplate parameter header > that includes the jinja from the example nic-configs and let the templating > handle all the logic for you. > > It would be cleaner to generate static templates that don't need to be > templated, but it would require re-implementing all of the custom network > logic for the UI. I'm not sure being cleaner is sufficient justification > for doing that. > > - store NIC config templates in deployment plan and reference them from >> network-environment.yaml >> >> Problems to solve: >> As a biggest problem to solve I see defining logic which would >> automatically handle assigning parameters to elements in network_config >> based on Network which user assigns to the element. For example: Using GUI, >> user is creating network_config for compute role based on >> network/config/multiple-nics/compute.yaml, user adds an interface and >> assigns the interface to Tenant network. Resulting template should then >> automatically populate addresses/ip_netmask: get_param: TenantIpSubnet. >> Question is whether all this logic should live in GUI or should GUI pass >> simplified format to Mistral workflow which will convert it to proper >> network_config format and populates the template with it. >> > > I guess the fact that I separated the UI and config generation code in my > tool is my answer to this question. I don't remember all of my reasons for > that design, but I think the main thing was to keep the input and > generation cleanly separated. Otherwise there was a danger of making a UI > change and having it break the generation process because they were tightly > coupled. Having a JSON interface between the two avoids a lot of those > problems. It also made it fairly easy to unit test the generation code, > whereas trying to mock out all of the UI elements would have been a fragile > nightmare. > > It does require a bunch of translation code[1], but a lot of it is fairly > boilerplate (just map UI inputs to JSON keys). > > 1: https://github.com/cybertron/tripleo-scripts/blob/171aedabfe > ad1f27f4dc0fce41a8b82da28923ed/net-iso-gen.py#L515 > > Hope this helps. Ben, thanks a lot for your input. I think this makes the direction with NIC configs clearer: 1. The generated template will include all possible parameters definitions unless we find a suitable way of populating parameters section part of template generation process. Note that current jinja templates for NIC config (e.g. network/config/multiple-nics/role.role.j2.yaml:127) create these definitions conditionally by specific role name which is not very elegant in terms of custom roles. 2. GUI is going to define forms to add/configure network elements (interface, bridge, bond, vlan, ...) and provide user friendly way to combine these together. 
The whole data construct (per Role) is going to be sent to tripleo-common workflow as json. Workflow consumes json input and produces final template yaml. I think we should be able to reuse bunch of the logic which Ben already created. Example: json input from GUI: ..., { type: 'interface', name: 'nic1', network_name_lower: 'external' },... transformed by tripleo-common: ... - type: interface name: nic{{loop.index + 1}} use_dhcp: false addresses: - ip_netmask: get_param: {{network.name}}IpSubnet ... With this approach, we'll create common API provided by Mistral to generate NIC config templates which can be reused by CLI and other clients, not TripleO UI specifically. Note that we will also need a 'reverse' Mistral workflow which is going to convert template yaml network_config into the input json format, so GUI can display current configuration to the user and let him change that. Liz has updated network configuration wireframes which can be found here https://lizsurette.github.io/OpenStack-Design/tripleo-ui/3-tripleo-ui-edge-cases/7.advancednetworkconfigurationandtopology . The goal is to provide a graphical network configuration overview and let user perform actions from it. This ensures that with every action performed, user immediately gets clear feedback on how does the network configuration look. -- Jirka > > > -Ben > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eng.szaher at gmail.com Thu Feb 15 10:25:58 2018 From: eng.szaher at gmail.com (Saad Zaher) Date: Thu, 15 Feb 2018 10:25:58 +0000 Subject: [openstack-dev] [freezer] PTG planning Etherpad In-Reply-To: References: Message-ID: Hi Adam, Sorry I forgot the link in my first email, you can find it here [1]. Best Regards, Saad! [1] https://etherpad.openstack.org/p/freezer-ptg-rocky On Mon, Feb 12, 2018 at 2:51 PM, Adam Heczko wrote: > Hello Saad, I think you missed link to the [1] etherpad. > > On Mon, Feb 12, 2018 at 3:05 PM, Saad Zaher wrote: > >> Hello everyone, >> >> Please, if anyone is going to attend the next PTG in dublin check >> ehterpad [1] for discussion agenda. >> >> Feel free to add or comment on topics you want to discuss in this PTG. >> >> Please make sure to add your irc or name to participants section. >> >> >> Best Regards, >> Saad! >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > > -- > Adam Heczko > Security Engineer @ Mirantis Inc. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- -------------------------- Best Regards, Saad! -------------- next part -------------- An HTML attachment was scrubbed... URL: From bob.ball at citrix.com Thu Feb 15 10:25:42 2018 From: bob.ball at citrix.com (Bob Ball) Date: Thu, 15 Feb 2018 10:25:42 +0000 Subject: [openstack-dev] Debian OpenStack packages switching to Py3 for Queens Message-ID: <3d1bd71c96b3497a8034cc1d472096b6@AMSPEX02CL01.citrite.net> Hi Thomas, As noted on the patch, XenServer only has python 2 (and some versions of XenServer even has Python 2.4) in domain0. 
This is code that will not run in Debian (only in XenServer's dom0) and therefore can be ignored or removed from the Debian package. It's not practical to convert these to support python 3.

Bob

-----Original Message-----
From: Thomas Goirand [mailto:zigo at debian.org]
Sent: 15 February 2018 08:31
To: openstack-dev at lists.openstack.org
Subject: [openstack-dev] Debian OpenStack packages switching to Py3 for Queens

Hi,

Since I'm getting some pressure from other DDs to actively remove Py2 support from my packages, I'm very much considering switching all of the Debian packages for Queens to using exclusively Py3. I would have liked to read some opinions about this. Is it a good time for such a move? I hope it is, because I'd like to maintain as few Python packages with Py2 support as possible at the time of the Debian Buster freeze.

Also, doing Queens, I've noticed that os-xenapi is still full of py2-only stuff in os_xenapi/dom0. Can we get those fixed? Here's my patch:

https://review.openstack.org/544809

Cheers,

Thomas Goirand (zigo)

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From dirk at dmllr.de Thu Feb 15 14:13:59 2018
From: dirk at dmllr.de (=?UTF-8?B?RGlyayBNw7xsbGVy?=)
Date: Thu, 15 Feb 2018 15:13:59 +0100
Subject: [openstack-dev] [requirements][release] FFE for tooz 1.60.0
Message-ID: 

Hi,

I would like to ask for an exception to add tooz 1.60.0 to stable/queens. As part of the msgpack-python -> msgpack switchover we converted all dependencies, but the tooz release did not include the dependency switch (not sure why, the branch point was just before the fix).

As it is a one-liner dependency change and it brings everything in stable/queens into a consistent state with regard to the dependencies (and for those who try to package OpenStack, msgpack and msgpack-python file-conflict and so cannot be coinstalled), I would like to ask for a FFE.

TIA,
Dirk

From e0ne at e0ne.info Thu Feb 15 14:33:00 2018
From: e0ne at e0ne.info (Ivan Kolodyazhny)
Date: Thu, 15 Feb 2018 16:33:00 +0200
Subject: [openstack-dev] [horizon][plugins] Supported Django versions for Horizon in Rocky
Message-ID: 

Hi team,

As was discussed at the weekly meeting on December 20th, the Horizon team is going to bump the minimum Django version to the next LTS release, which is 1.11. Django 1.8 will be supported at least until April 2018 [2]. In the Rocky release, we agreed to support Django 1.11 and Django 2.x as experimental [3]. We're going to drop Django <= 1.10 support early in Rocky [4] to give plugin maintainers more time to adapt their projects to the latest supported Django. Unfortunately, we can't support both Django 1.8 and 2.0 because of Django's deprecations and removals, so we need to bump the minimum version. Debian will switch its default Django version to 2.x soon. Although Django 2.0 is not an LTS release, it seems worth considering support for Django 2.0.
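As a concrete illustration of the kind of deprecation/removal that makes supporting Django 1.8 and 2.0 at the same time impractical: django.core.urlresolvers (still the usual import path on 1.8) was removed in Django 2.0, and its contents have lived in django.urls since 1.10. During the transition a plugin can carry a small compatibility shim like the sketch below; this is a generic Django pattern shown purely as an illustration, not a recommendation specific to any plugin.

try:
    # Django >= 1.10, including 1.11 LTS and 2.0
    from django.urls import reverse
except ImportError:
    # Only reached on Django < 1.10, e.g. 1.8
    from django.core.urlresolvers import reverse

# Either way, reverse() is called the same:
# url = reverse('some-view-name')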
We'll have a continuation of this conversation at the PTG [5].

[1] http://eavesdrop.openstack.org/meetings/horizon/2017/horizon.2017-12-20-20.00.log.html#l-39
[2] https://www.djangoproject.com/download/
[3] https://blueprints.launchpad.net/horizon/+spec/django2-support
[4] https://review.openstack.org/#/q/topic:bp/django2-support+(status:open+OR+status:merged)
[5] https://etherpad.openstack.org/p/horizon-ptg-rocky

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Louie.Kwan at windriver.com Thu Feb 15 15:05:45 2018
From: Louie.Kwan at windriver.com (Kwan, Louie)
Date: Thu, 15 Feb 2018 15:05:45 +0000
Subject: [openstack-dev] [masakari] [masakari-monitors] : Intrusive Instance Monitoring through QEMU Guest Agent Design Update
Message-ID: <47EFB32CD8770A4D9590812EE28C977E9624F0FD@ALA-MBD.corp.ad.wrs.com>

We submitted the first implementation patch for the following blueprint
https://blueprints.launchpad.net/openstack/?searchtext=intrusive-instance-monitoring
i.e. https://review.openstack.org/#/c/534958/
The second patch will be pushed within a week or so.

One item we would like to seek clarification on from the community is how we should integrate the notification within the masakari engine. One option is to reuse what has been defined in masakari/engine/instance_events.py, e.g.:

def masakari_notifier(self, domain_uuid):
    if self.getJournalObject(domain_uuid).getSentNotification():
        LOG.debug('notifier.send_notification Skipped:' + domain_uuid)
    else:
        hostname = socket.gethostname()
        noticeType = ec.EventConstants.TYPE_VM
        current_time = timeutils.utcnow()
        event = {
            'notification': {
                'type': noticeType,
                'hostname': hostname,
                'generated_time': current_time,
                'payload': {
                    'event': 'LIFECYCLE',
                    'instance_uuid': domain_uuid,
                    'vir_domain_event': 'STOPPED_FAILED'
                }
            }
        }
        LOG.debug(str(event))
        self.notifier.send_notification(CONF.callback.retry_max,
                                        CONF.callback.retry_interval,
                                        event)
        self.getJournalObject(domain_uuid).setSentNotification(True)

Should we
1. define a new type of event for intrusive instance monitoring, or
2. add a new event within INSTANCE_EVENTS, as we may eventually integrate with instance monitoring, or
3. simply reuse the LIFECYCLE/STOPPED_FAILED event (which is what we are implementing for now)?

One of our reference test cases is to detect an application meltdown within a VM, a failure that QEMU may not be aware of. The recovery should be pretty much the same as for the LIFECYCLE/STOPPED_FAILED event.

What do you think? Thanks.

Louie

Note: here is what we got from masakari/engine/instance_events.py:

"""
These are the events which needs to be processed by masakari in case of
instance recovery failure.
"""
INSTANCE_EVENTS = {
    # Add more events and vir_domain_events here.
    'LIFECYCLE': ['STOPPED_FAILED'],
    'IO_ERROR': ['IO_ERROR_REPORT']
}

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From stdake at cisco.com Thu Feb 15 15:43:08 2018
From: stdake at cisco.com (Steven Dake (stdake))
Date: Thu, 15 Feb 2018 15:43:08 +0000
Subject: [openstack-dev] [Election] PTL Election Results & Conclusion
In-Reply-To: <997afcdf-abb4-0599-3b96-dbca1f55534a@openstack.org>
References: <997afcdf-abb4-0599-3b96-dbca1f55534a@openstack.org>
Message-ID: <1B02A3C7-3E09-44CE-A694-FE046C5F7196@cisco.com>

Agreed!

Congratulations to all our newly elected PTLs. For past PTLs, thanks a bunch for your service!
The election process I'm certain is very difficult to execute, and as a community member, I'd like to thank the election officials for their work. Cheers -steve On 2/15/18, 2:47 AM, "Thierry Carrez" wrote: Kendall Nelson wrote: > Thank you to the electorate, to all those who voted and to all > candidates who put their name forward for Project Team Lead (PTL) in > this election. A healthy, open process breeds trust in our decision > making capability thank you to all those who make this process possible. > > Now for the results of the PTL election process, please join me in > extending congratulations to the following PTLs: [...] Congrats to all newly-elected PTLs, and thanks to the election officials for their service ! On the stats side, we renewed 17 of the 64 PTLs, so around 27%. Our usual renewal rate is more around 35%, but we did renew more at the last elections (40%) so this is likely why we didn't renew as much as usual this time. -- Thierry Carrez (ttx) __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From bodenvmw at gmail.com Thu Feb 15 16:09:56 2018 From: bodenvmw at gmail.com (Boden Russell) Date: Thu, 15 Feb 2018 09:09:56 -0700 Subject: [openstack-dev] [neutron] Proper usage of neutron extensions/modules Message-ID: If your networking project is using neutron/neutron-lib, please read on. SUMMARY: If you're using neutron or neutron-lib code (for example extensions), please ensure you import/use the respective attributes of those modules rather than duplicate such values (str constants and such). DETAILS: To fully consume neutron-lib changes; the respective code that was rehomed into lib is removed from neutron once all consumers using stable branches are updated to use lib (instead of neutron). In order to find such consumers we generally search [1] for who imports the respective modules of interest. This allows us to update the consumers and ensure they don't break once we remove the code from neutron. The implication is that if consumers are using (depending on) the neutron code, but never import it, they are missed in this process and can end up with breakage when we remove the code from neutron. A recent example of such includes [2] mentioned on [3]. An example of what's being asked for can be found in [4]. ACTION: If neutron consumers could please inspect their code to ensure they are declaring their intent to use neutron with 'imports' and also use neutron module attributes were applicable, we can minimize the number of breakages that occur in this process. Feel free to reach out to me (boden) on openstack-neutron with any questions/comments. Thank you! [1] http://codesearch.openstack.org/ [2] https://review.openstack.org/#/c/544179/ [3] http://lists.openstack.org/pipermail/openstack-dev/2018-February/127385.html [4] https://bugs.launchpad.net/kuryr-libnetwork/+bug/1749594 From andr.kurilin at gmail.com Thu Feb 15 16:42:34 2018 From: andr.kurilin at gmail.com (Andrey Kurilin) Date: Thu, 15 Feb 2018 18:42:34 +0200 Subject: [openstack-dev] [nova] Adding Takashi Natsume to python-novaclient core In-Reply-To: References: Message-ID: +1 Takashi, thanks for your contribution! 2018-02-11 4:48 GMT+02:00 Alex Xu : > +1 > > 2018-02-09 23:01 GMT+08:00 Matt Riedemann : > >> I'd like to add Takashi to the python-novaclient core team. 
>> >> python-novaclient doesn't get a ton of activity or review, but Takashi >> has been a solid reviewer and contributor to that project for quite awhile >> now: >> >> http://stackalytics.com/report/contribution/python-novaclient/180 >> >> He's always fast to get new changes up for microversion support and help >> review others that are there to keep moving changes forward. >> >> So unless there are objections, I'll plan on adding Takashi to the >> python-novaclient-core group next week. >> >> -- >> >> Thanks, >> >> Matt >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Best regards, Andrey Kurilin. -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Thu Feb 15 16:42:50 2018 From: openstack at nemebean.com (Ben Nemec) Date: Thu, 15 Feb 2018 10:42:50 -0600 Subject: [openstack-dev] [TripleO][ui] Network Configuration wizard In-Reply-To: <34034e37-9372-c3d2-eace-e8285d5f2549@redhat.com> References: <34034e37-9372-c3d2-eace-e8285d5f2549@redhat.com> Message-ID: <90dfea59-ef77-9020-015e-2936372ea422@nemebean.com> Re-sending from the account I'm subscribed with. On 02/15/2018 10:40 AM, Ben Nemec wrote: > > > On 02/15/2018 04:00 AM, Jiri Tomasek wrote: >> With this approach, we'll create common API provided by Mistral to >> generate NIC config templates which can be reused by CLI and other >> clients, not TripleO UI specifically. Note that we will also need a >> 'reverse' Mistral workflow which is going to convert template yaml >> network_config into the input json format, so GUI can display current >> configuration to the user and let him change that. > > Oh, that reminds me: there were some things that I needed to store for > GUI use that aren't represented in the output templates.  That's why my > tool writes out a pickle file that contains the intermediate data > format, and when it loads a set of templates it actually reads that > pickle file, not the templates themselves. > > I don't recall all of the bits, but at a glance I see that I had stored > values for auto_routes, ipv6, and version.  You can probably ignore > version since you'll only ever need to support version 2, and it's > possible you could derive the other two values based on the values in > the templates.  It would require some fuzzy logic though, I think. > > I believe there was also some one-way transformation done on the data > when converting the JSON data to templates.  I'm not sure whether that > could be reversed from just the templates.  It would certainly be more > complex to do it that way. > > Just something to keep in mind.  Especially because these templates will > mostly never be seen by end-users since they'll go straight into the > plan, it may be much simpler to store intermediate data alongside the > templates and use that for populating the GUI. 
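As a minimal illustration of the "store intermediate data alongside the templates" idea above, a generation step could simply write a small sidecar file next to the rendered NIC configs and have the UI read that back instead of reverse-parsing YAML. File names, keys and the use of JSON rather than pickle are assumptions for the sketch, not the existing tool's format.

import json
import os

def write_plan_artifacts(output_dir, nic_templates, gui_state):
    # nic_templates: dict mapping file name -> rendered YAML text
    # gui_state: values the templates alone cannot express,
    #            e.g. {'auto_routes': True, 'ipv6': False, 'version': 2}
    for name, content in nic_templates.items():
        with open(os.path.join(output_dir, name), 'w') as f:
            f.write(content)
    # The UI later reloads this file to repopulate its forms.
    with open(os.path.join(output_dir, 'ui-settings.json'), 'w') as f:
        json.dump(gui_state, f, indent=2)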
From msm at redhat.com Thu Feb 15 16:53:56 2018 From: msm at redhat.com (michael mccune) Date: Thu, 15 Feb 2018 11:53:56 -0500 Subject: [openstack-dev] [all][api] POST /api-sig/news Message-ID: <7271de3f-6b92-a8a4-c1fb-180ffbbae8cb@redhat.com> Greetings OpenStack community, Today's meeting was brief and primarily covered planning for the PTG. Here's a quick recap. We began by continuing to discuss the bug [8] that is the result of the Nova API not properly including caching information in the headers of its replies. Dmitry Tantsur has added comments to the bug and the SIG will most likely have more input on that report. The SIG has agreed with the position described by Chris Dent in the bug report, that this is bug and should be remedied as soon as possible. If you have thoughts on this, please add your perspective to that bug report. Next we reviewed the etherpad[7] of topics for the upcoming PTG, with the entire group taking an action item to prioritize the issues. If you are interested in the topics listed on that etherpad, we invite you to please add a "+1" next to anything that you would like to discuss, or a -1 if you don't think that topic deserves discussion time. Lastly, there was some talk of an informal API-SIG meetup for the PTG with no firm plans confirmed. Beers may or may not have been discussed. As always if you're interested in helping out, in addition to coming to the meetings, there's also: * The list of bugs [5] indicates several missing or incomplete guidelines. * The existing guidelines [2] always need refreshing to account for changes over time. If you find something that's not quite right, submit a patch [6] to fix it. * Have you done something for which you think guidance would have made things easier but couldn't find any? Submit a patch and help others [6]. # Newly Published Guidelines None this week. # API Guidelines Proposed for Freeze Guidelines that are ready for wider review by the whole community. None this week. # Guidelines Currently Under Review [3] * Add guideline on exposing microversions in SDKs https://review.openstack.org/#/c/532814/ * Add API-schema guide (still being defined) https://review.openstack.org/#/c/524467/ * A (shrinking) suite of several documents about doing version and service discovery Start at https://review.openstack.org/#/c/459405/ * WIP: microversion architecture archival doc (very early; not yet ready for review) https://review.openstack.org/444892 # Highlighting your API impacting issues If you seek further review and insight from the API SIG about APIs that you are developing or changing, please address your concerns in an email to the OpenStack developer mailing list[1] with the tag "[api]" in the subject. In your email, you should include any relevant reviews, links, and comments to help guide the discussion of the specific challenge you are facing. To learn more about the API SIG mission and the work we do, see our wiki page [4] and guidelines [2]. Thanks for reading and see you next week! 
# References [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [2] http://specs.openstack.org/openstack/api-wg/ [3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z [4] https://wiki.openstack.org/wiki/API_SIG [5] https://bugs.launchpad.net/openstack-api-wg [6] https://git.openstack.org/cgit/openstack/api-wg [7] https://etherpad.openstack.org/p/api-sig-ptg-rocky [8] https://bugs.launchpad.net/nova/+bug/1747935 Meeting Agenda https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda Past Meeting Records http://eavesdrop.openstack.org/meetings/api_sig/ Open Bugs https://bugs.launchpad.net/openstack-api-wg From Louie.Kwan at windriver.com Thu Feb 15 17:08:33 2018 From: Louie.Kwan at windriver.com (Kwan, Louie) Date: Thu, 15 Feb 2018 17:08:33 +0000 Subject: [openstack-dev] [automaton] How to extend automaton? In-Reply-To: <5A83D1C4.9070102@fastmail.com> References: <47EFB32CD8770A4D9590812EE28C977E9624DBF4@ALA-MBD.corp.ad.wrs.com>, <5A83D1C4.9070102@fastmail.com> Message-ID: <47EFB32CD8770A4D9590812EE28C977E9624F2A4@ALA-MBD.corp.ad.wrs.com> Thanks for the reply. I will take the subclass approach for now. It will be nice if we can dynamically register additional info or even a callback function after building a machine from a state space listing. -LK ________________________________________ From: Joshua Harlow [harlowja at fastmail.com] Sent: Wednesday, February 14, 2018 1:05 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [automaton] How to extend automaton? As far a 1, I'd recommend just use functools.partial or make an object with all the extra stuff u want and have that object provide a __call__ method. As far as 2, you might have to subclass the FSM baseclass and add those into the internal data-structure (same for 3 I think); ie this one @ https://github.com/openstack/automaton/blob/master/automaton/machines.py#L186-L191 Of course feel free to do it differently and submit a patch that folks (myself and others) can review. -Josh Kwan, Louie wrote: > https://github.com/openstack/automaton > > Friendly state machines for python. > > A few questions about automaton. > > 1.I would like to know can we addition parameters on on_enter or on_exit > callbacks. Right now, it seems it only allows state and triggered_event. > > a.I have many FSM running for different objects and it is much easier if > I can pass on the some sort of ID back to the callbacks. > > 2.Can we or how can we store extra attribute like last state change > *timestamp*? > > 3.Can we store additional identify info for the FSM object? Would like > to add an */UUID/* > > Thanks. > > Louie > > def print_on_enter(new_state, triggered_event): > > print("Entered '%s' due to '%s'" % (new_state, triggered_event)) > > def print_on_exit(old_state, triggered_event): > > print("Exiting '%s' due to '%s'" % (old_state, triggered_event)) > > # This will contain all the states and transitions that our machine will > > # allow, the format is relatively simple and designed to be easy to use. > > state_space = [ > > { > > 'name': 'stopped', > > 'next_states': { > > # On event 'play' transition to the 'playing' state. 
> > 'play': 'playing', > > 'open_close': 'opened', > > 'stop': 'stopped', > > }, > > 'on_enter': print_on_enter, > > 'on_exit': print_on_exit, > > }, > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mriedemos at gmail.com Thu Feb 15 17:18:46 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 15 Feb 2018 11:18:46 -0600 Subject: [openstack-dev] [nova] Adding Takashi Natsume to python-novaclient core In-Reply-To: References: Message-ID: <1dc00987-28a6-c9d0-6e70-0a9346edd3f9@gmail.com> On 2/9/2018 9:01 AM, Matt Riedemann wrote: > I'd like to add Takashi to the python-novaclient core team. > > python-novaclient doesn't get a ton of activity or review, but Takashi > has been a solid reviewer and contributor to that project for quite > awhile now: > > http://stackalytics.com/report/contribution/python-novaclient/180 > > He's always fast to get new changes up for microversion support and help > review others that are there to keep moving changes forward. > > So unless there are objections, I'll plan on adding Takashi to the > python-novaclient-core group next week. I've added Takashi to python-novaclient-core: https://review.openstack.org/#/admin/groups/572,members Thanks everyone. -- Thanks, Matt From andrea.frittoli at gmail.com Thu Feb 15 17:28:29 2018 From: andrea.frittoli at gmail.com (Andrea Frittoli) Date: Thu, 15 Feb 2018 17:28:29 +0000 Subject: [openstack-dev] [QA][stable] Py3 integration jobs on stable Message-ID: Dear all, since it's now RC1 time, C1 we're setting up CI jobs for stable branches and periodic-stable jobs for stable/queens. In the past, we used to run the py27 based tempest-full integration job (legacy-tempest-dsvm-neutron-full). With all the effort that went into py3 support, I think it's time to start running the py3 integration job as well against stable branches. The integrated-gate-py35 [1] already includes stable/queens. I proposed a patch for the periodic-stable pipeline [2]. Please let me know if you have any concern. Andrea Frittoli (andreaf) [1] https://github.com/openstack-infra/openstack-zuul-jobs/blob/master/zuul.d/project-templates.yaml#L1008 [2] https://review.openstack.org/#/c/521888/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From andr.kurilin at gmail.com Thu Feb 15 17:39:09 2018 From: andr.kurilin at gmail.com (Andrey Kurilin) Date: Thu, 15 Feb 2018 19:39:09 +0200 Subject: [openstack-dev] [nova] Adding Takashi Natsume to python-novaclient core In-Reply-To: <1dc00987-28a6-c9d0-6e70-0a9346edd3f9@gmail.com> References: <1dc00987-28a6-c9d0-6e70-0a9346edd3f9@gmail.com> Message-ID: \o/ Welcome to the team! 2018-02-15 19:18 GMT+02:00 Matt Riedemann : > On 2/9/2018 9:01 AM, Matt Riedemann wrote: > >> I'd like to add Takashi to the python-novaclient core team. 
>> >> python-novaclient doesn't get a ton of activity or review, but Takashi >> has been a solid reviewer and contributor to that project for quite awhile >> now: >> >> http://stackalytics.com/report/contribution/python-novaclient/180 >> >> He's always fast to get new changes up for microversion support and help >> review others that are there to keep moving changes forward. >> >> So unless there are objections, I'll plan on adding Takashi to the >> python-novaclient-core group next week. >> > > I've added Takashi to python-novaclient-core: > > https://review.openstack.org/#/admin/groups/572,members > > Thanks everyone. > > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Best regards, Andrey Kurilin. -------------- next part -------------- An HTML attachment was scrubbed... URL: From kumarmn at us.ibm.com Thu Feb 15 18:15:29 2018 From: kumarmn at us.ibm.com (Manoj Kumar) Date: Thu, 15 Feb 2018 12:15:29 -0600 Subject: [openstack-dev] [trove] PTG planning, weekly meeting for Trove In-Reply-To: References: <6e8813b1-c05b-e729-75dd-7c9863fd0730@catalyst.net.nz> Message-ID: I would encourage everyone who is interested in providing input into the Rocky planning cycle for Trove to put their ideas into the etherpad at: https://etherpad.openstack.org/p/trove-ptg-rocky There are a good number of topics posted already. We would welcome input from operators as well. We are planning to meet remotely using Skype. If you are interested in participating in the discussions do add your Skype ID as well. As we prepare for the PTG, there will be no weekly meeting next week. - Manoj -------------- next part -------------- An HTML attachment was scrubbed... URL: From hguemar at fedoraproject.org Thu Feb 15 18:38:15 2018 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Thu, 15 Feb 2018 19:38:15 +0100 Subject: [openstack-dev] Debian OpenStack packages switching to Py3 for Queens In-Reply-To: <3d1bd71c96b3497a8034cc1d472096b6@AMSPEX02CL01.citrite.net> References: <3d1bd71c96b3497a8034cc1d472096b6@AMSPEX02CL01.citrite.net> Message-ID: 2018-02-15 11:25 GMT+01:00 Bob Ball : > Hi Thomas, > > As noted on the patch, XenServer only has python 2 (and some versions of XenServer even has Python 2.4) in domain0. This is code that will not run in Debian (only in XenServer's dom0) and therefore can be ignored or removed from the Debian package. > It's not practical to convert these to support python 3. > > Bob > We're not there yet but we also plan to work on migrating RDO to Python 3. And I have to disagree, this code is called by other projects and their tests, so it will likely be an impediment in migrating OpenStack to Python 3, not just a "packaging" issue. If this code is meant to run on Dom0, fine, then we won't package it, but we also have to decouple that dependency from Nova, Neutron, Ceilometer etc... to either communicate directly through an API endpoint or a light wrapper around it. Regards, H. 
> -----Original Message----- > From: Thomas Goirand [mailto:zigo at debian.org] > Sent: 15 February 2018 08:31 > To: openstack-dev at lists.openstack.org > Subject: [openstack-dev] Debian OpenStack packages switching to Py3 for Queens > > Hi, > > Since I'm getting some pressure from other DDs to actively remove Py2 support from my packages, I'm very much considering switching all of the Debian packages for Queens to using exclusively Py3. I would have like to read some opinions about this. Is it a good time for such move? I hope it is, because I'd like to maintain as few Python package with Py2 support at the time of Debian Buster freeze. > > Also, doing Queens, I've noticed that os-xenapi is still full of py2 only stuff in os_xenapi/dom0. Can we get those fixes? Here's my patch: > > https://review.openstack.org/544809 > > Cheers, > > Thomas Goirand (zigo) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From bob.ball at citrix.com Thu Feb 15 19:02:07 2018 From: bob.ball at citrix.com (Bob Ball) Date: Thu, 15 Feb 2018 19:02:07 +0000 Subject: [openstack-dev] Debian OpenStack packages switching to Py3 for Queens In-Reply-To: References: <3d1bd71c96b3497a8034cc1d472096b6@AMSPEX02CL01.citrite.net>, Message-ID: 7:2205 Hi, > If this code is meant to run on Dom0, fine, then we won't package it, > but we also have to decouple that dependency from Nova, Neutron, > Ceilometer etc... to either communicate directly through an API > endpoint or a light wrapper around it. There is already a light wrapper here - other parts of os-xenapi provide the API to Nova/Neutron/etc which make calls through to the plugins in Dom0. These projects should now know nothing about the actual plugins or how they are called. Bob ________________________________ From: Haïkel Sent: Thursday, 15 February 2018 6:39 p.m. To: "OpenStack Development Mailing List (not for usage questions)" Subject: Re: [openstack-dev] Debian OpenStack packages switching to Py3 for Queens 2018-02-15 11:25 GMT+01:00 Bob Ball : > Hi Thomas, > > As noted on the patch, XenServer only has python 2 (and some versions of XenServer even has Python 2.4) in domain0. This is code that will not run in Debian (only in XenServer's dom0) and therefore can be ignored or removed from the Debian package. > It's not practical to convert these to support python 3. > > Bob > We're not there yet but we also plan to work on migrating RDO to Python 3. And I have to disagree, this code is called by other projects and their tests, so it will likely be an impediment in migrating OpenStack to Python 3, not just a "packaging" issue. If this code is meant to run on Dom0, fine, then we won't package it, but we also have to decouple that dependency from Nova, Neutron, Ceilometer etc... to either communicate directly through an API endpoint or a light wrapper around it. Regards, H. 
> -----Original Message----- > From: Thomas Goirand [mailto:zigo at debian.org] > Sent: 15 February 2018 08:31 > To: openstack-dev at lists.openstack.org > Subject: [openstack-dev] Debian OpenStack packages switching to Py3 for Queens > > Hi, > > Since I'm getting some pressure from other DDs to actively remove Py2 support from my packages, I'm very much considering switching all of the Debian packages for Queens to using exclusively Py3. I would have like to read some opinions about this. Is it a good time for such move? I hope it is, because I'd like to maintain as few Python package with Py2 support at the time of Debian Buster freeze. > > Also, doing Queens, I've noticed that os-xenapi is still full of py2 only stuff in os_xenapi/dom0. Can we get those fixes? Here's my patch: > > https://review.openstack.org/544809 > > Cheers, > > Thomas Goirand (zigo) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Thu Feb 15 19:09:41 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 15 Feb 2018 19:09:41 +0000 Subject: [openstack-dev] [Election] Process Tweaks Message-ID: Hello Everyone, Over the last few elections, with changes to the election cycle (i.e. the separation of PTL and TC elections not being back to back), the scripts in place have become somewhat outdated and brittle. A few days ago after fixing a number of candidates names in an exceptions file[1] due to incorrect information given to the docs build by a gerrit lookup function, we had a conversation about how to fix this and other issues. The lengthy discussion expanded from how to improve the processes for both generation of the governance docs with correct candidate names to the validation of candidates when nominations are posted to Gerrit. Basically, we are proposing several changes to the scripts that exist and changes to how nominations are submitted. 1. Uncouple the TC and PTL election processes. Make changes to our tooling to validate PTL candidates and make those separate from the changes to validate TC candidates. 2. Change the how-to-submit-candidacy directions to require the candidate's email address (matching in Gerrit and foundation member profile) as the file name of their nomination. All other info (name, IRC nick, etc.) should be set in the foundation member profile. This could also mean a reformatting the nomination submission altoghether to be YAML or JSON (open for debate). 3. Create separate jobs for both docs build and candidate validation (and run separate validation functions for TC elections versus PTL elections). Please feel free to raise comments, concerns, or better ideas! 
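To make point 2 above a bit more concrete, here is a minimal, purely illustrative sketch of the kind of file-name check a validation job could run on a proposed nomination; the directory layout and the exact rules are assumptions, not the current election tooling. Matching the address against Gerrit and the foundation member profile would still be a separate lookup step.

import os
import re

EMAIL_RE = re.compile(r'^[^@\s]+@[^@\s]+\.[^@\s]+$')

def candidate_email_from_path(path):
    # Assumes a layout like candidates/<cycle>/<Project>/<email> and
    # returns the email encoded in the file name, or raises ValueError.
    email = os.path.basename(path)
    if not EMAIL_RE.match(email):
        raise ValueError('%s is not a valid candidate email address' % email)
    return email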
The plan is to schedule time at the PTG to start hacking on some of these items so feedback before then would be fantastic!

- Your Friendly Neighborhood Election Officials

1: http://git.openstack.org/cgit/openstack/election/tree/exceptions.txt
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From rasca at redhat.com Thu Feb 15 19:22:16 2018
From: rasca at redhat.com (Raoul Scarazzini)
Date: Thu, 15 Feb 2018 20:22:16 +0100
Subject: [openstack-dev] [TripleO][CI] Validating HA on upstream
Message-ID: <3bbeffd7-5950-bd17-d608-c28f96fab779@redhat.com>

TL;DR: we would like to change the way HA is tested upstream to avoid being hit by avoidable bugs that the CI process should catch.

Long version:

Today, HA testing upstream consists only of verifying that a three-controller setup comes up correctly and can spawn an instance. That's something, but it’s far from being enough since we continuously see "day two" bugs. We started covering this more than a year ago in internal CI and today also on rdocloud, using a project named tripleo-quickstart-utils [1]. Apart from its name, the project is not limited to tripleo-quickstart; it covers three principal roles:

1 - stonith-config: a playbook that can be used to automate the creation of fencing devices in the overcloud;
2 - instance-ha: a playbook that automates the seventeen manual steps needed to configure instance HA in the overcloud, tests them via rally and verifies that instance HA works;
3 - validate-ha: a playbook that runs a series of disruptive actions in the overcloud and verifies it always behaves correctly by deploying a heat template that involves all the overcloud components;

To make this usable upstream, we need to understand where to put this code. Here are some choices:

1 - tripleo-validations: the most logical place to put this, at least looking at the name, would be tripleo-validations. I've talked with some of the folks working on it, and it came out that the tripleo-validations project is not meant for disruptive tests. Integrating this stuff would be out of scope.

2 - tripleo-quickstart-extras: apart from the fact that this is not something meant just for quickstart (the project supports infrared and "plain" environments as well), even if we initially started there, in the end it came out that nobody was looking at the patches since nobody was able to verify them. The result was a series of reviews stuck forever. So moving back to extras would be a step backward.

3 - Dedicated project (tripleo-ha-utils or just tripleo-utils): like for tripleo-upgrades or tripleo-validations, it would be ideal to have all of this grouped and usable as a standalone thing. Any integration is possible inside the playbook for whatever kind of test. Today we're using the bash framework to interact with the cluster, rally to test instance-ha and Ansible itself to simulate full power outage scenarios.

There's been a lot of talk about this during the last PTG [2], and unfortunately I'll not be part of the next one, but I would like to see things moving on this side. Everything I wrote is of course up for discussion; that's precisely the meaning of this mail. Thanks to all who'll give advice, suggestions, and thoughts about all this stuff.
[1] https://github.com/redhat-openstack/tripleo-quickstart-utils [2] https://etherpad.openstack.org/p/qa-queens-ptg-destructive-testing -- Raoul Scarazzini rasca at redhat.com From anteaya at anteaya.info Thu Feb 15 19:27:47 2018 From: anteaya at anteaya.info (Anita Kuno) Date: Thu, 15 Feb 2018 14:27:47 -0500 Subject: [openstack-dev] [Election] Process Tweaks In-Reply-To: References: Message-ID: On 2018-02-15 02:09 PM, Kendall Nelson wrote: > Hello Everyone, > > Over the last few elections, with changes to the election cycle (i.e. the > separation of PTL and TC elections not being back to back), the scripts in > place have become somewhat outdated and brittle. > > A few days ago after fixing a number of candidates names in an exceptions > file[1] due to incorrect information given to the docs build by a gerrit > lookup function, we had a conversation about how to fix this and other > issues. The lengthy discussion expanded from how to improve the processes > for both generation of the governance docs with correct candidate names to > the validation of candidates when nominations are posted to Gerrit. > > Basically, we are proposing several changes to the scripts that exist and > changes to how nominations are submitted. > > 1. Uncouple the TC and PTL election processes. Make changes to our tooling > to validate PTL candidates and make those separate from the changes to > validate TC candidates. > > 2. Change the how-to-submit-candidacy directions to require the candidate's > email address (matching in Gerrit and foundation member profile) as the > file name of their nomination. All other info (name, IRC nick, etc.) should > be set in the foundation member profile. This could also mean a > reformatting the nomination submission altoghether to be YAML or JSON (open > for debate). > > 3. Create separate jobs for both docs build and candidate validation (and > run separate validation functions for TC elections versus PTL elections). > > Please feel free to raise comments, concerns, or better ideas! Please ensure that all who nominate themselves know that the process is meant as a means of communication not as a blockade to standing for nomination. That the process of nominating themselves has a back up solution, for example the low tech send-an-email-to-dev, should something untoward happen whilst they are working through the automated process in order to meet the deadline. Technical glitches and failures do happen and should be acknowledged as such, not allowing them to prevent someone from standing should they wish to stand. Thanks, Anita. > > The plan is to schedule time at the PTG to start hacking on some of these > items so feedback before then would be fantastic! > > - Your Friendly Neighborhood Election Officials > > 1: http://git.openstack.org/cgit/openstack/election/tree/exceptions.txt > > From prometheanfire at gentoo.org Thu Feb 15 19:38:45 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Thu, 15 Feb 2018 13:38:45 -0600 Subject: [openstack-dev] Debian OpenStack packages switching to Py3 for Queens In-Reply-To: <2916933d-c5be-9301-f8de-e0d380627c54@debian.org> References: <2916933d-c5be-9301-f8de-e0d380627c54@debian.org> Message-ID: <20180215193845.paju37mbcvizmbpq@gentoo.org> On 18-02-15 09:31:19, Thomas Goirand wrote: > Hi, > > Since I'm getting some pressure from other DDs to actively remove Py2 > support from my packages, I'm very much considering switching all of the > Debian packages for Queens to using exclusively Py3. 
I would have like > to read some opinions about this. Is it a good time for such move? I > hope it is, because I'd like to maintain as few Python package with Py2 > support at the time of Debian Buster freeze. > > Also, doing Queens, I've noticed that os-xenapi is still full of py2 > only stuff in os_xenapi/dom0. Can we get those fixes? Here's my patch: > > https://review.openstack.org/544809 > Gentoo has Openstack packaged for both python2.7 and python3.(5,6) for pike. Queens will be the same for us at least, but I haven't had problems with at least the core services running them all through python3.x. -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From prometheanfire at gentoo.org Thu Feb 15 19:40:01 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Thu, 15 Feb 2018 13:40:01 -0600 Subject: [openstack-dev] [requirements][release] FFE for tooz 1.60.0 In-Reply-To: References: Message-ID: <20180215194001.zcwczzhatfno4hyk@gentoo.org> On 18-02-15 15:13:59, Dirk Müller wrote: > Hi, > > I would like to ask for a exception to add tooz 1.60.0 to > stable/queens. As part of the msgpack-python -> msgpack switch over we > converted all > dependencies, but the tooz release did not include the dependency > switch (not sure why, the branch point was just before the fix). > > As it is a one liner dependency change and it brings everything in > stable/queens in a consistent state related to the dependencies (and > for > those who try to package openstack msgpack and msgpack-python do > file-conflict so can not be coinstalled) I would like to ask for a > FFE. > +2+W from requirements -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From kurt.r.taylor at gmail.com Thu Feb 15 19:48:53 2018 From: kurt.r.taylor at gmail.com (Kurt Taylor) Date: Thu, 15 Feb 2018 13:48:53 -0600 Subject: [openstack-dev] [kolla] role change and introductions Message-ID: My downstream responsibilities have shifted over the last few months and it probably comes as no surprise that I am not going to be able to be as involved in the kolla project, including being the doc liaison. I'm having to remove myself from that role and will also not be attending PTG. The Kolla team has made great strides in improving the documentation, keep it going! Second, there will be 2 others from my ppc64le team getting involved in Kolla, Mark Hamzy and Ed Leafe. Ed will be attending PTG and will try to get a chance to meet a few of you there. Kurt Taylor (krtaylor) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From prometheanfire at gentoo.org Thu Feb 15 19:50:47 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Thu, 15 Feb 2018 13:50:47 -0600 Subject: [openstack-dev] Debian OpenStack packages switching to Py3 for Queens In-Reply-To: <20180215193845.paju37mbcvizmbpq@gentoo.org> References: <2916933d-c5be-9301-f8de-e0d380627c54@debian.org> <20180215193845.paju37mbcvizmbpq@gentoo.org> Message-ID: <20180215195047.pxkfktdxkepihvoo@gentoo.org> On 18-02-15 13:38:45, Matthew Thode wrote: > On 18-02-15 09:31:19, Thomas Goirand wrote: > > Hi, > > > > Since I'm getting some pressure from other DDs to actively remove Py2 > > support from my packages, I'm very much considering switching all of the > > Debian packages for Queens to using exclusively Py3. I would have like > > to read some opinions about this. Is it a good time for such move? I > > hope it is, because I'd like to maintain as few Python package with Py2 > > support at the time of Debian Buster freeze. > > > > Also, doing Queens, I've noticed that os-xenapi is still full of py2 > > only stuff in os_xenapi/dom0. Can we get those fixes? Here's my patch: > > > > https://review.openstack.org/544809 > > > > Gentoo has Openstack packaged for both python2.7 and python3.(5,6) for > pike. Queens will be the same for us at least, but I haven't had > problems with at least the core services running them all through > python3.x. > Edit: Everything BUT swift... -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From marcin.juszkiewicz at linaro.org Thu Feb 15 20:41:24 2018 From: marcin.juszkiewicz at linaro.org (Marcin Juszkiewicz) Date: Thu, 15 Feb 2018 21:41:24 +0100 Subject: [openstack-dev] [kolla] role change and introductions In-Reply-To: References: Message-ID: <064168f7-5272-3863-25ad-9959234ff930@linaro.org> W dniu 15.02.2018 o 20:48, Kurt Taylor pisze: > My downstream responsibilities have shifted over the last few months > and it probably comes as no surprise that I am not going to be able > to be as involved in the kolla project, including being the doc > liaison. I'm having to remove myself from that role and will also not > be attending PTG. The Kolla team has made great strides in improving > the documentation, keep it going! Sad to see you leaving man. But such is life and work duties. > Second, there will be 2 others from my ppc64le team getting involved > in Kolla, Mark Hamzy and Ed Leafe. Ed will be attending PTG and will > try to get a chance to meet a few of you there. Please tell him that I would like to chat about ppc64le stuff ;D From tpb at dyncloud.net Thu Feb 15 21:16:21 2018 From: tpb at dyncloud.net (Tom Barron) Date: Thu, 15 Feb 2018 16:16:21 -0500 Subject: [openstack-dev] [manila] PTG schedule and social event Message-ID: <20180215211621.xtsmdji36old4onn@barron.net> Manila sessions at the PTG are scheduled for Tuesday and Friday. Ben Swartzlander and I worked together to distribute topics across the two days and you can see the results here: https://etherpad.openstack.org/p/manila-rocky-ptg We'll of course end up shifting topics and times around as required, but this will be our starting point. So look it over and if anything has a time slot that just won't work for you, let me know and we'll likely be able to adjust. Also, if you have a topic and it's not on the etherpad, add it under "Proposed Topics" and give me a ping. 
Finally, we're planning a social event for Tuesday evening, so stay tuned for more details. If you think you can join us, please add your name to the etherpad above under the "Team Dinner Planned" section. -- Tom Barron -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From Louie.Kwan at windriver.com Thu Feb 15 21:16:14 2018 From: Louie.Kwan at windriver.com (Kwan, Louie) Date: Thu, 15 Feb 2018 21:16:14 +0000 Subject: [openstack-dev] [masakari] [notification api] How to clean up or purging of records Message-ID: <47EFB32CD8770A4D9590812EE28C977E9624F65F@ALA-MBD.corp.ad.wrs.com> Hi All, Just wondering, how can we clean up the masakari notification list or purging all old records in the DB? openstack notification list returns too many old records During semi-auto testing, I created a long list of history of records and would like to clean it up and avoid unnecessary actions. Any short term solution is what I am looking for and/or ideas how to extend the CLI is also welcomed so that some of us can extend it later. Thanks, Louie -------------- next part -------------- An HTML attachment was scrubbed... URL: From eumel at arcor.de Thu Feb 15 21:24:31 2018 From: eumel at arcor.de (Frank Kloeker) Date: Thu, 15 Feb 2018 22:24:31 +0100 Subject: [openstack-dev] [Election] Process Tweaks In-Reply-To: References: Message-ID: <9c8423d191167e4c9811f2740f6a3b2b@arcor.de> Am 2018-02-15 20:09, schrieb Kendall Nelson: > Hello Everyone, > > Over the last few elections, with changes to the election cycle (i.e. > the separation of PTL and TC elections not being back to back), the > scripts in place have become somewhat outdated and brittle. [...] > 3. Create separate jobs for both docs build and candidate validation > (and run separate validation functions for TC elections versus PTL > elections). > > Please feel free to raise comments, concerns, or better ideas! > > The plan is to schedule time at the PTG to start hacking on some of > these items so feedback before then would be fantastic! Hi Kendall, we have this in developement for I18n Extra-ATC collection on [1], the generated stats on [2]. There is one task with validation openstackid, which validated the given email address. Problem is here, translators using different email addresses for Zanata and it's not possible to validate the user with his name. Difficult. PTG hacking session would be nice. Wednesday/Thursday afternoon? kind regards Frank [1] https://review.openstack.org/#/c/531600/7/playbooks/generate_atc.yml [2] https://wiki.openstack.org/wiki/I18nTeam/ATC_statistics#Queens_cycle_.282017-07-01_to_2018-01-10.29 From ed at leafe.com Thu Feb 15 21:34:56 2018 From: ed at leafe.com (Ed Leafe) Date: Thu, 15 Feb 2018 15:34:56 -0600 Subject: [openstack-dev] [kolla] role change and introductions In-Reply-To: <064168f7-5272-3863-25ad-9959234ff930@linaro.org> References: <064168f7-5272-3863-25ad-9959234ff930@linaro.org> Message-ID: On Feb 15, 2018, at 2:41 PM, Marcin Juszkiewicz wrote: > >> Second, there will be 2 others from my ppc64le team getting involved >> in Kolla, Mark Hamzy and Ed Leafe. Ed will be attending PTG and will >> try to get a chance to meet a few of you there. > > Please tell him that I would like to chat about ppc64le stuff ;D Of course, as long as you don’t expect me to know very much! 
:) -- Ed Leafe From kennelson11 at gmail.com Thu Feb 15 21:56:25 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 15 Feb 2018 21:56:25 +0000 Subject: [openstack-dev] [PTL][SIG][PTG]Team Photos In-Reply-To: References: Message-ID: Updates! So, we have gotten permission to do photos down on the pitch at the stadium which is awesome! The only issue is that we need to condense into a more dense blocks (Tuesday afternoon or Thursday morning) so looking at the schedule we have to move some teams. If the following teams could move their times so that we can make this happen: - QA - SIG K8s - Cyborg - Neutron - Octavia - Requirements - Release Mgmt - OpenStack Ansible - Cinder I'm really sorry to make you guys move, but since we need to pay for an escort (with a 4 hour minimum) and don't want to conflict with lunch, we need to shift. We will have your team meet at registration at your selected time. Because we get to go on the pitch and this requires an escort, you NEED TO BE AT REG ON TIME OR EARLY. If you aren't, you will miss your chance to be in the photo. I will send out a reminder on Monday of PTG week. -Kendall (diablo_rojo) On Thu, Feb 8, 2018 at 10:21 AM Kendall Nelson wrote: > This link might work better for everyone: > > https://docs.google.com/spreadsheets/d/1J2MRdVQzSyakz9HgTHfwYPe49PaoTypX66eNURsopQY/edit?usp=sharing > > -Kendall (diablo_rojo) > > > On Wed, Feb 7, 2018 at 9:15 PM Kendall Nelson > wrote: > >> Hello PTLs and SIG Chairs! >> >> So here's the deal, we have 50 spots that are first come, first >> served. We have slots available before and after lunch both Tuesday and >> Thursday. >> >> The google sheet here[1] should be set up so you have access to edit, but >> if you can't for some reason just reply directly to me and I can add your >> team to the list (I need team/sig name and contact email). >> >> I will be locking the google sheet on *Monday February 26th so I need to >> know if your team is interested by then. * >> >> See you soon! >> >> - Kendall Nelson (diablo_rojo) >> >> [1] >> https://docs.google.com/spreadsheets/d/1J2MRdVQzSyakz9HgTHfwYPe49PaoTypX66eNURsopQY/edit?usp=sharing >> >> >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrea.frittoli at gmail.com Thu Feb 15 22:06:06 2018 From: andrea.frittoli at gmail.com (Andrea Frittoli) Date: Thu, 15 Feb 2018 22:06:06 +0000 Subject: [openstack-dev] [PTL][SIG][PTG]Team Photos In-Reply-To: References: Message-ID: I set Thursday 12:00-12:10 for the QA team. Andrea Frittoli (andreaf) On Thu, Feb 15, 2018 at 9:56 PM Kendall Nelson wrote: > Updates! > > So, we have gotten permission to do photos down on the pitch at the > stadium which is awesome! > > The only issue is that we need to condense into a more dense blocks > (Tuesday afternoon or Thursday morning) so looking at the schedule we have > to move some teams. If the following teams could move their times so that > we can make this happen: > > - QA > - SIG K8s > - Cyborg > - Neutron > - Octavia > - Requirements > - Release Mgmt > - OpenStack Ansible > - Cinder > > I'm really sorry to make you guys move, but since we need to pay for an > escort (with a 4 hour minimum) and don't want to conflict with lunch, we > need to shift. > > We will have your team meet at registration at your selected time. Because > we get to go on the pitch and this requires an escort, you NEED TO BE AT > REG ON TIME OR EARLY. If you aren't, you will miss your chance to be in the > photo. 
> > I will send out a reminder on Monday of PTG week. > > -Kendall (diablo_rojo) > > On Thu, Feb 8, 2018 at 10:21 AM Kendall Nelson > wrote: > >> This link might work better for everyone: >> >> https://docs.google.com/spreadsheets/d/1J2MRdVQzSyakz9HgTHfwYPe49PaoTypX66eNURsopQY/edit?usp=sharing >> >> -Kendall (diablo_rojo) >> >> >> On Wed, Feb 7, 2018 at 9:15 PM Kendall Nelson >> wrote: >> >>> Hello PTLs and SIG Chairs! >>> >>> So here's the deal, we have 50 spots that are first come, first >>> served. We have slots available before and after lunch both Tuesday and >>> Thursday. >>> >>> The google sheet here[1] should be set up so you have access to edit, >>> but if you can't for some reason just reply directly to me and I can add >>> your team to the list (I need team/sig name and contact email). >>> >>> I will be locking the google sheet on *Monday February 26th so I need >>> to know if your team is interested by then. * >>> >>> See you soon! >>> >>> - Kendall Nelson (diablo_rojo) >>> >>> [1] >>> https://docs.google.com/spreadsheets/d/1J2MRdVQzSyakz9HgTHfwYPe49PaoTypX66eNURsopQY/edit?usp=sharing >>> >>> >>> >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Thu Feb 15 22:07:36 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 15 Feb 2018 22:07:36 +0000 Subject: [openstack-dev] [Election] Process Tweaks In-Reply-To: <9c8423d191167e4c9811f2740f6a3b2b@arcor.de> References: <9c8423d191167e4c9811f2740f6a3b2b@arcor.de> Message-ID: <20180215220735.jb2bd37ghcztgbtf@yuggoth.org> On 2018-02-15 22:24:31 +0100 (+0100), Frank Kloeker wrote: [...] > There is one task with validation openstackid, which validated the given > email address. Problem is here, translators using different email addresses > for Zanata and it's not possible to validate the user with his name. > Difficult. [...] As long as the address you get from Zanata appears in at least one of the E-mail address fields of the contributor's foundation individual member profile, then the foundation member lookup API should be able to locate the correct record for validation. In that regard, it shouldn't be any more of a problem than it is for code contributors (where at least one of the addresses for their Gerrit account needs to appear in at least one of the E-mail address fields for their member profile). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From sean.mcginnis at gmx.com Thu Feb 15 22:15:15 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 15 Feb 2018 16:15:15 -0600 Subject: [openstack-dev] [PTL][SIG][PTG]Team Photos In-Reply-To: References: Message-ID: <20180215221514.GA32032@sm-xps> On Thu, Feb 15, 2018 at 09:56:25PM +0000, Kendall Nelson wrote: > Updates! > > So, we have gotten permission to do photos down on the pitch at the stadium > which is awesome! > > The only issue is that we need to condense into a more dense blocks > (Tuesday afternoon or Thursday morning) so looking at the schedule we have > to move some teams. If the following teams could move their times so that > we can make this happen: > > - QA > - SIG K8s > - Cyborg > - Neutron > - Octavia > - Requirements > - Release Mgmt > - OpenStack Ansible > - Cinder > > I'm really sorry to make you guys move, but since we need to pay for an > escort (with a 4 hour minimum) and don't want to conflict with lunch, we > need to shift. 
> Apparently my other email address wasn't subscribed to the ML so resending. Apologies to anyone that gets this twice... That sounds fun, but should we really do this? At past PTG's, it was a disruption for teams to put things on hold to quick run down a couple floors to take a photo. I think it is worth the brief disruption - something like this is definitely worthy of taking the time to have a photo of the team. But in those cases it was quick and fairly easy to context switch back to what had been the topic before the break. Now, if we are going to need to leave the building, wait for the photo, then walk back to the in to the venue, that seems like it's going to be a longer disruption that would be a bigger context switch and require more time to remember where things were before the break. I'm fine doing it either way, but I have some concerns about how big of a disruption this could now be. Sean From prometheanfire at gentoo.org Thu Feb 15 22:25:32 2018 From: prometheanfire at gentoo.org (prometheanfire at gentoo.org) Date: Thu, 15 Feb 2018 16:25:32 -0600 Subject: [openstack-dev] [PTL][SIG][PTG]Team Photos In-Reply-To: References: Message-ID: <20180215222532.27kopjb7ovb2dgvt@gentoo.org> On 18-02-15 21:56:25, Kendall Nelson wrote: > Updates! > > So, we have gotten permission to do photos down on the pitch at the stadium > which is awesome! > > The only issue is that we need to condense into a more dense blocks > (Tuesday afternoon or Thursday morning) so looking at the schedule we have > to move some teams. If the following teams could move their times so that > we can make this happen: > > - QA > - SIG K8s > - Cyborg > - Neutron > - Octavia > - Requirements > - Release Mgmt > - OpenStack Ansible > - Cinder > > I'm really sorry to make you guys move, but since we need to pay for an > escort (with a 4 hour minimum) and don't want to conflict with lunch, we > need to shift. > > We will have your team meet at registration at your selected time. Because > we get to go on the pitch and this requires an escort, you NEED TO BE AT > REG ON TIME OR EARLY. If you aren't, you will miss your chance to be in the > photo. > > I will send out a reminder on Monday of PTG week. > > -Kendall (diablo_rojo) > > On Thu, Feb 8, 2018 at 10:21 AM Kendall Nelson > wrote: > > > This link might work better for everyone: > > > > https://docs.google.com/spreadsheets/d/1J2MRdVQzSyakz9HgTHfwYPe49PaoTypX66eNURsopQY/edit?usp=sharing > > > > -Kendall (diablo_rojo) > > > > > > On Wed, Feb 7, 2018 at 9:15 PM Kendall Nelson > > wrote: > > > >> Hello PTLs and SIG Chairs! > >> > >> So here's the deal, we have 50 spots that are first come, first > >> served. We have slots available before and after lunch both Tuesday and > >> Thursday. > >> > >> The google sheet here[1] should be set up so you have access to edit, but > >> if you can't for some reason just reply directly to me and I can add your > >> team to the list (I need team/sig name and contact email). > >> > >> I will be locking the google sheet on *Monday February 26th so I need to > >> know if your team is interested by then. * > >> > >> See you soon! > >> > >> - Kendall Nelson (diablo_rojo) > >> > >> [1] > >> https://docs.google.com/spreadsheets/d/1J2MRdVQzSyakz9HgTHfwYPe49PaoTypX66eNURsopQY/edit?usp=sharing > >> Tuesday 2:40-2:50 for requirements can work -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From anne at openstack.org Thu Feb 15 23:09:56 2018 From: anne at openstack.org (Anne Bertucio) Date: Thu, 15 Feb 2018 15:09:56 -0800 Subject: [openstack-dev] [release] Collecting Queens demos Message-ID: <652102B1-1F30-4D78-A3A1-D7227D8F9829@openstack.org> Hi all, We’re getting the Queens Release communications ready, and I’ve seen a handful of video demos and tutorials of new Queens features. We’d like to compile a list of these to share with the marketing community. If you have a demo, would you please send a link my way so we can make sure to include it? If you don’t have a demo and have the time, I’d encourage you to make one of a feature you’re really excited about! We’ve heard really positive feedback about what’s already out there; people love them! Cheers, Anne Bertucio OpenStack Foundation anne at openstack.org | 206-992-7961 -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrea.frittoli at gmail.com Thu Feb 15 23:31:02 2018 From: andrea.frittoli at gmail.com (Andrea Frittoli) Date: Thu, 15 Feb 2018 23:31:02 +0000 Subject: [openstack-dev] [QA][all] Migration of Tempest / Grenade jobs to Zuul v3 native Message-ID: Dear all, this is the first or a series of ~regular updates on the migration of Tempest / Grenade jobs to Zuul v3 native. The QA team together with the infra team are working on providing the OpenStack community with a set of base Tempest / Grenade jobs that can be used as a basis to write new CI jobs / migrate existing legacy ones with a minimal effort and very little or no Ansible knowledge as a precondition. The effort is tracked in an etherpad [0]; I'm trying to keep the etherpad up to date but it may not always be a source of truth. Useful jobs available so far: - devstack-tempest [0] is a simple tempest/devstack job that runs keystone glance nova cinder neutron swift and tempest *smoke* filter - tempest-full [1] is similar but runs a full test run - it replaces the legacy tempest-dsvm-neutron-full from the integrated gate - tempest-full-py3 [2] runs a full test run on python3 - it replaces the legacy tempest-dsvm-py35 Both tempest-full and tempest-full-py3 are part of integrated-gate templates, starting from stable/queens on. The other stable branches still run the legacy jobs, since devstack ansible changes have not been backported (yet). If we do backport it will be up to pike maximum. Those jobs work in single node mode only at the moment. Enabling multinode via job configuration only require a new Zuul feature [4][5] that should be available soon; the new feature allows defining host/group variables in the job definition, which means setting variables which are specific to one host or a group of hosts. Multinode DVR and Ironic jobs will require migration of the ovs-* roles form devstack-gate to devstack as well. Grenade jobs (single and multinode) are still legacy, even if the *legacy* word has been removed from the name. They are currently temporarily hosted in the neutron repository. They are going to be implemented as Zuul v3 native in the grenade repository. Roles are documented, and a couple of migration tips for DEVSTACK_GATE flags is available in the etherpad [0]; more comprehensive examples / docs will be available as soon as possible. Please let me know if you find this update useful and / or if you would like to see different information in it. 
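As a rough sketch of how a project could build on these jobs (the plugin name and the variables below are illustrative assumptions, not taken from the jobs above), a child job that enables a devstack plugin might look something like this in the project's .zuul.yaml:

  - job:
      name: my-plugin-tempest
      parent: devstack-tempest
      required-projects:
        - openstack/my-devstack-plugin
      vars:
        devstack_plugins:
          my-devstack-plugin: https://git.openstack.org/openstack/my-devstack-plugin

The intent is that only the plugin repository and a handful of variables need to be declared, while the base job takes care of the devstack / tempest setup - please check the role documentation for the exact variable names before relying on them.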
I will send further updates as soon as significant changes / new features become available. Andrea Frittoli (andreaf) [0] https://etherpad.openstack.org/p/zuulv3-native-devstack-tempest-jobs [1] http://git.openstack.org/cgit/openstack/tempest/tree/.zuul.yaml#n1 [2] http://git.openstack.org/cgit/openstack/tempest/tree/.zuul.yaml#n29 [3] http://git.openstack.org/cgit/openstack/tempest/tree/.zuul.yaml#n47 [4] https://etherpad.openstack.org/p/zuulv3-group-variables [5] https://review.openstack.org/#/c/544562/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From jungleboyj at gmail.com Thu Feb 15 23:49:14 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Thu, 15 Feb 2018 17:49:14 -0600 Subject: [openstack-dev] [PTL][SIG][PTG]Team Photos In-Reply-To: References: Message-ID: <4b77e0af-4aec-1bdb-980a-d565273275e7@gmail.com> All, I have moved Cinder's time to the slot before lunch on Thursday. We can just break early for lunch so it will not be a huge disruption. Jay On 2/15/2018 3:56 PM, Kendall Nelson wrote: > Updates! > > So, we have gotten permission to do photos down on the pitch at the > stadium which is awesome! > > The only issue is that we need to condense into a more dense blocks > (Tuesday afternoon or Thursday morning) so looking at the schedule we > have to move some teams. If the following teams could move their times > so that we can make this happen: > > * QA > * SIG K8s > * Cyborg > * Neutron > * Octavia > * Requirements > * Release Mgmt > * OpenStack Ansible > * Cinder > > I'm really sorry to make you guys move, but since we need to pay for > an escort (with a 4 hour minimum) and don't want to conflict with > lunch, we need to shift. > > We will have your team meet at registration at your selected time. > Because we get to go on the pitch and this requires an escort, you > NEED TO BE AT REG ON TIME OR EARLY. If you aren't, you will miss your > chance to be in the photo. > > I will send out a reminder on Monday of PTG week. > > -Kendall (diablo_rojo) > > On Thu, Feb 8, 2018 at 10:21 AM Kendall Nelson > wrote: > > This link might work better for everyone: > https://docs.google.com/spreadsheets/d/1J2MRdVQzSyakz9HgTHfwYPe49PaoTypX66eNURsopQY/edit?usp=sharing > > > -Kendall (diablo_rojo) > > > On Wed, Feb 7, 2018 at 9:15 PM Kendall Nelson > > wrote: > > Hello PTLs and SIG Chairs! > > So here's the deal, we have 50 spots that are first come, > first served. We have slots available before and after lunch > both Tuesday and Thursday. > > The google sheet here[1] should be set up so you have access > to edit, but if you can't for some reason just reply directly > to me and I can add your team to the list (I need team/sig > name and contact email). > > I will be locking the google sheet on *Monday February 26th so > I need to know if your team is interested by then. * > > See you soon! > > - Kendall Nelson (diablo_rojo) > > [1] > https://docs.google.com/spreadsheets/d/1J2MRdVQzSyakz9HgTHfwYPe49PaoTypX66eNURsopQY/edit?usp=sharing > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From miguel at mlavalle.com Fri Feb 16 00:04:25 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Thu, 15 Feb 2018 18:04:25 -0600 Subject: [openstack-dev] [PTL][SIG][PTG]Team Photos In-Reply-To: References: Message-ID: Kendall, I updated the Neutron team slot to 12:20 - 12:30 on Thursday? Hope that is ok Thanks On Thu, Feb 15, 2018 at 3:56 PM, Kendall Nelson wrote: > Updates! 
> > So, we have gotten permission to do photos down on the pitch at the > stadium which is awesome! > > The only issue is that we need to condense into a more dense blocks > (Tuesday afternoon or Thursday morning) so looking at the schedule we have > to move some teams. If the following teams could move their times so that > we can make this happen: > > - QA > - SIG K8s > - Cyborg > - Neutron > - Octavia > - Requirements > - Release Mgmt > - OpenStack Ansible > - Cinder > > I'm really sorry to make you guys move, but since we need to pay for an > escort (with a 4 hour minimum) and don't want to conflict with lunch, we > need to shift. > > We will have your team meet at registration at your selected time. Because > we get to go on the pitch and this requires an escort, you NEED TO BE AT > REG ON TIME OR EARLY. If you aren't, you will miss your chance to be in the > photo. > > I will send out a reminder on Monday of PTG week. > > -Kendall (diablo_rojo) > > On Thu, Feb 8, 2018 at 10:21 AM Kendall Nelson > wrote: > >> This link might work better for everyone: >> https://docs.google.com/spreadsheets/d/1J2MRdVQzSyakz9HgTHfwYPe49PaoT >> ypX66eNURsopQY/edit?usp=sharing >> >> -Kendall (diablo_rojo) >> >> >> On Wed, Feb 7, 2018 at 9:15 PM Kendall Nelson >> wrote: >> >>> Hello PTLs and SIG Chairs! >>> >>> So here's the deal, we have 50 spots that are first come, first >>> served. We have slots available before and after lunch both Tuesday and >>> Thursday. >>> >>> The google sheet here[1] should be set up so you have access to edit, >>> but if you can't for some reason just reply directly to me and I can add >>> your team to the list (I need team/sig name and contact email). >>> >>> I will be locking the google sheet on *Monday February 26th so I need >>> to know if your team is interested by then. * >>> >>> See you soon! >>> >>> - Kendall Nelson (diablo_rojo) >>> >>> [1] https://docs.google.com/spreadsheets/d/ >>> 1J2MRdVQzSyakz9HgTHfwYPe49PaoTypX66eNURsopQY/edit?usp=sharing >>> >>> >>> >>> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Dinesh.Bhor at nttdata.com Fri Feb 16 00:09:36 2018 From: Dinesh.Bhor at nttdata.com (Bhor, Dinesh) Date: Fri, 16 Feb 2018 00:09:36 +0000 Subject: [openstack-dev] [masakari] [notification api] How to clean up or purging of records In-Reply-To: <47EFB32CD8770A4D9590812EE28C977E9624F65F@ALA-MBD.corp.ad.wrs.com> References: <47EFB32CD8770A4D9590812EE28C977E9624F65F@ALA-MBD.corp.ad.wrs.com> Message-ID: Hi Kwan Louie, I think you are looking for this: https://review.openstack.org/#/c/487430/ Thank you, Dinesh Bhor ________________________________________ From: Kwan, Louie Sent: 16 February 2018 02:46:14 To: OpenStack Development Mailing List (not for usage questions) Subject: [openstack-dev] [masakari] [notification api] How to clean up or purging of records Hi All, Just wondering, how can we clean up the masakari notification list or purging all old records in the DB? openstack notification list returns too many old records During semi-auto testing, I created a long list of history of records and would like to clean it up and avoid unnecessary actions. 
Any short term solution is what I am looking for and/or ideas how to extend the CLI is also welcomed so that some of us can extend it later. Thanks, Louie ______________________________________________________________________ Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged, confidential, and proprietary data. If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding. From kennelson11 at gmail.com Fri Feb 16 00:17:46 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Fri, 16 Feb 2018 00:17:46 +0000 Subject: [openstack-dev] [PTL][SIG][PTG]Team Photos In-Reply-To: References: Message-ID: I totally understand the concern. I don't know if I have stressed it before, but team photos are 100% optional. If its going to be too much of a distraction for anyone's team, they definitely don't need to sign up. The plan is to meet at reg, walk to the field entrance where the escort will take us onto the pitch, take the photo and go back. It won't be like this next time either, we just wanted to try to take advantage of what the location has to offer. -Kendall (diablo_rojo) On Thu, Feb 15, 2018 at 2:11 PM Sean McGinnis wrote: > On Thu, Feb 15, 2018 at 3:56 PM, Kendall Nelson > wrote: > >> Updates! >> >> So, we have gotten permission to do photos down on the pitch at the >> stadium which is awesome! >> >> The only issue is that we need to condense into a more dense blocks >> (Tuesday afternoon or Thursday morning) so looking at the schedule we have >> to move some teams. If the following teams could move their times so that >> we can make this happen: >> >> - QA >> - SIG K8s >> - Cyborg >> - Neutron >> - Octavia >> - Requirements >> - Release Mgmt >> - OpenStack Ansible >> - Cinder >> >> I'm really sorry to make you guys move, but since we need to pay for an >> escort (with a 4 hour minimum) and don't want to conflict with lunch, we >> need to shift. >> >> > That sounds fun, but should we really do this? > > At past PTG's, it was a disruption for teams to put things on hold to > quick run down a couple floors to take a photo. I think it is worth the > brief disruption - something like this is definitely worthy of taking the > time to have a photo of the team. > > But in those cases it was quick and fairly easy to context switch back to > what had been the topic before the break. Now, if we are going to need to > leave the building, wait for the photo, then walk back to the in to the > venue, that seems like it's going to be a longer disruption that would be a > bigger context switch and require more time to remember where things were > before the break. > > I'm fine doing it either way, but I have some concerns about how big of a > disruption this could now be. > > Sean > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony at bakeyournoodle.com Fri Feb 16 00:44:37 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Fri, 16 Feb 2018 11:44:37 +1100 Subject: [openstack-dev] [QA][stable] Py3 integration jobs on stable In-Reply-To: References: Message-ID: <20180216004437.GW23143@thor.bakeyournoodle.com> On Thu, Feb 15, 2018 at 05:28:29PM +0000, Andrea Frittoli wrote: > Dear all, > > since it's now RC1 time, C1 we're setting up CI jobs for stable branches > and periodic-stable jobs for stable/queens. 
> > In the past, we used to run the py27 based tempest-full integration job > (legacy-tempest-dsvm-neutron-full). > With all the effort that went into py3 support, I think it's time to start > running the py3 integration job as well against stable branches. > > The integrated-gate-py35 [1] already includes stable/queens. > I proposed a patch for the periodic-stable pipeline [2]. > > Please let me know if you have any concern. Sounds good to me. Thanks! Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From corvus at inaugust.com Fri Feb 16 01:11:19 2018 From: corvus at inaugust.com (James E. Blair) Date: Thu, 15 Feb 2018 17:11:19 -0800 Subject: [openstack-dev] [QA][all] Migration of Tempest / Grenade jobs to Zuul v3 native In-Reply-To: (Andrea Frittoli's message of "Thu, 15 Feb 2018 23:31:02 +0000") References: Message-ID: <87inax3fig.fsf@meyer.lemoncheese.net> Andrea Frittoli writes: > Dear all, > > this is the first or a series of ~regular updates on the migration of > Tempest / Grenade jobs to Zuul v3 native. > > The QA team together with the infra team are working on providing the > OpenStack community with a set of base Tempest / Grenade jobs that can be > used as a basis to write new CI jobs / migrate existing legacy ones with a > minimal effort and very little or no Ansible knowledge as a precondition. > > The effort is tracked in an etherpad [0]; I'm trying to keep the > etherpad up to date but it may not always be a source of truth. Thanks! One other issue we noticed when using the new job is related to devstack plugin ordering. We're trying to design an interface to the job that makes it easy to take the standard devstack and/or tempest job and add in a plugin for a project. This should greatly reduce the boilerplate needed for new devstack jobs compared to Zuul v2. However, our interface for enabling plugins in Zuul is not ordered, but sometimes ordering is important. To address this, we've added a feature to devstack plugins which allow them to express a dependency on other plugins. Nothing else but Zuul uses this right now, though we expand support for this in devstack in the future. If you maintain a devstack plugin which depends on another devstack plugin, you can go ahead and indicate that with "plugin_requires" in the settings file. See [1] for more details. We also need to land a change to the role that writes the devstack config in order to use this new feature; it's ready for review in [2]. -Jim [1] https://docs.openstack.org/devstack/latest/plugins.html#plugin-interface [2] https://review.openstack.org/522054 From tony at bakeyournoodle.com Fri Feb 16 01:36:38 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Fri, 16 Feb 2018 12:36:38 +1100 Subject: [openstack-dev] [requirements][trove][tatu][barbican][compass][daisycloud][freezer][fuel][nova][openstack-ansible][pyghmi][solum] Migration from pycrypto In-Reply-To: <20180214195929.muysdpwt77y3lln5@gentoo.org> References: <20180214160947.dowuweoigacnfztt@gentoo.org> <20180214195552.GA18614@sm-xps> <20180214195929.muysdpwt77y3lln5@gentoo.org> Message-ID: <20180216013637.GX23143@thor.bakeyournoodle.com> On Wed, Feb 14, 2018 at 01:59:29PM -0600, Matthew Thode wrote: > On 18-02-14 13:55:53, Sean McGinnis wrote: > > On Wed, Feb 14, 2018 at 10:09:47AM -0600, Matthew Thode wrote: > > > Development has stalled, (since 2014). 
It's been forked but now would > > > be a good time to move to a more actively maintained crypto library like > > > cryptography. > > > > > > Requirements wishes to drop pycrypto. Let me know if there's anything > > > we can do to facilitate this. > > > > > > -- > > > Matthew Thode (prometheanfire) > > > > We did have a discussion on the ML, and I think a little at one of the PTGs, > > about the path forward for this. IIRC, there was one other potential supported > > package that was considered for an option, but we settled on cryptography as > > the recommended path forward to get off of pycrypto. I think it had to do with > > ease of being able to just drop in the new package with minimal affected code. > > > > Yep, I remember it, I'm not mentioning it because I'd like to focus on > moving to cryptography rather than move to the fork. Seems like a good PTG ad-hoc session. But looking at the dump below I don't actually think that we have that much work to do to switch. $ get-all-requirements.py --all --pkgs pycrypto Package : pycrypto [pycrypto>=2.6] (used by 11 projects) Included in : 4 projects openstack/barbican [cycle-with-milestones] openstack/freezer [cycle-with-milestones] openstack/solum [cycle-with-intermediary] openstack/trove [cycle-with-milestones] Also affects : 7 projects openstack-dev/heat-cfnclient [None] openstack/compass-core [None] openstack/nova-powervm [None] openstack/openstack-ansible [cycle-trailing] openstack/pyghmi [None] openstack/rpm-packaging [None] openstack/tatu [None] $ bash ./check_more_pycrypto.sh openstack/barbican openstack/freezer openstack/solum openstack/trove openstack-dev/heat-cfnclient openstack/compass-core openstack/nova-powervm openstack/openstack-ansible openstack/pyghmi openstack/rpm-packaging openstack/tatu openstack/barbican:origin/master:barbican/tests/tasks/test_certificate_resources.py:555: def test_should_return_for_pycrypto_stored_key_with_passphrase(self): openstack/barbican:origin/master:barbican/tests/tasks/test_certificate_resources.py:597: def test_should_return_for_pycrypto_stored_key_without_passphrase(self): openstack/barbican:origin/master:barbican/tests/tasks/test_certificate_resources.py:632: def test_should_raise_for_pycrypto_stored_key_no_container(self): openstack/barbican:origin/master:barbican/tests/tasks/test_certificate_resources.py:666: def test_should_raise_for_pycrypto_stored_key_no_private_key(self): openstack/barbican:origin/master:requirements.txt:25:pycrypto>=2.6 # Public Domain openstack/barbican:origin/master:barbican/plugin/dogtag.py:22:from Crypto.PublicKey import RSA # nosec openstack/barbican:origin/master:barbican/plugin/dogtag.py:23:from Crypto.Util import asn1 # nosec openstack/barbican:origin/master:barbican/tests/plugin/test_dogtag.py:21:from Crypto.PublicKey import RSA # nosec openstack/freezer:origin/master:README.rst:127:- pycrypto openstack/freezer:origin/master:README.rst:590:restore. 
When a key is provided, it uses OpenSSL or pycrypto module (OpenSSL compatible) openstack/freezer:origin/master:requirements.txt:21:pycrypto>=2.6 # Public Domain openstack/freezer:origin/master:freezer/utils/crypt.py:17:from Crypto.Cipher import AES openstack/freezer:origin/master:freezer/utils/crypt.py:18:from Crypto import Random openstack/solum:origin/master:devstack/devstack-provenance:253:pip|pycrypto|2.6.1 openstack/solum:origin/master:requirements.txt:24:pycrypto>=2.6 # Public Domain openstack/solum:origin/master:solum/api/handlers/plan_handler.py:20:from Crypto.PublicKey import RSA openstack/solum:origin/master:solum/common/utils.py:14:from Crypto.Cipher import AES openstack/trove:origin/master:integration/scripts/files/requirements/fedora-requirements.txt:30:pycrypto>=2.6 # Public Domain openstack/trove:origin/master:integration/scripts/files/requirements/ubuntu-requirements.txt:29:pycrypto>=2.6 # Public Domain openstack/trove:origin/master:requirements.txt:47:pycrypto>=2.6 # Public Domain openstack/trove:origin/master:trove/common/crypto_utils.py:19:from Crypto.Cipher import AES openstack/trove:origin/master:trove/common/crypto_utils.py:20:from Crypto import Random openstack/trove:origin/master:trove/tests/unittests/common/test_crypto_utils.py:17:from Crypto import Random openstack/trove:origin/master:trove/tests/unittests/common/test_stream_codecs.py:17:from Crypto import Random openstack/compass-core:origin/master:test-requirements.txt:9:pycrypto openstack/nova-powervm:origin/master:test-requirements.txt:8:pycrypto>=2.6 # Public Domain openstack/pyghmi:origin/master:requirements.txt:1:pycrypto>=2.6 openstack/pyghmi:origin/master:pyghmi/ipmi/private/session.py:30:from Crypto.Cipher import AES openstack/rpm-packaging:origin/master:openstack/freezer/freezer.spec.j2:42:BuildRequires: {{ py2pkg('pycrypto') }} openstack/rpm-packaging:origin/master:openstack/freezer/freezer.spec.j2:87:Requires: {{ py2pkg('pycrypto') }} openstack/rpm-packaging:origin/master:openstack/keystoneauth1/keystoneauth1.spec.j2:24:BuildRequires: {{ py2pkg('pycrypto', py_versions=['py2', 'py3']) }} openstack/rpm-packaging:origin/master:openstack/keystonemiddleware/keystonemiddleware.spec.j2:27:BuildRequires: {{ py2pkg('pycrypto') }} openstack/rpm-packaging:origin/master:openstack/pyghmi/pyghmi.spec.j2:17:BuildRequires: {{ py2pkg('pycrypto', py_versions=['py2', 'py3']) }} openstack/rpm-packaging:origin/master:openstack/pyghmi/pyghmi.spec.j2:18:Requires: {{ py2pkg('pycrypto') }} openstack/rpm-packaging:origin/master:openstack/python-troveclient/python-troveclient.spec.j2:20:BuildRequires: {{ py2pkg('pycrypto') }} openstack/rpm-packaging:origin/master:requirements.txt:192:pycrypto>=2.6 # Public Domain openstack/rpm-packaging:origin/master:requirements.txt:228:# NOTE(dims): pysaml 4.0.3 uses pycryptodome instead of pycrypto, for mitaka openstack/rpm-packaging:origin/master:requirements.txt:229:# we cannot switch to pycryptodome as many projects are likely to break. 
So openstack/rpm-packaging:origin/master:requirements.txt:231:# dependencies like paramiko switch to pycryptodome, we should revisit this openstack/rpm-packaging:origin/master:requirements.txt:232:# and fully switch over to pycryptodome and stop using pycrypto openstack/tatu:origin/master:requirements.txt:7:pycrypto>=2.6.1 openstack/tatu:origin/master:test-requirements.txt:7:pycrypto>=2.6.1 openstack/tatu:origin/master:scripts/get-user-cert:20:from Crypto.PublicKey import RSA openstack/tatu:origin/master:scripts/revoke-user-cert:20:from Crypto.PublicKey import RSA openstack/tatu:origin/master:tatu/api/models.py:17:from Crypto.PublicKey import RSA openstack/tatu:origin/master:tatu/db/models.py:13:from Crypto.PublicKey import RSA openstack/tatu:origin/master:tatu/ftests/test_api.py:13:from Crypto.PublicKey import RSA openstack/tatu:origin/master:tatu/tests/test_app.py:13:from Crypto.PublicKey import RSA Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From ed at leafe.com Fri Feb 16 03:43:21 2018 From: ed at leafe.com (Ed Leafe) Date: Thu, 15 Feb 2018 21:43:21 -0600 Subject: [openstack-dev] [Election] PTL Election Results & Conclusion In-Reply-To: <1B02A3C7-3E09-44CE-A694-FE046C5F7196@cisco.com> References: <997afcdf-abb4-0599-3b96-dbca1f55534a@openstack.org> <1B02A3C7-3E09-44CE-A694-FE046C5F7196@cisco.com> Message-ID: On Feb 15, 2018, at 9:43 AM, Steven Dake (stdake) wrote: > > The election process I'm certain is very difficult to execute, and as a community member, I'd like to thank the election officials for their work. Having done it once, I can agree! Thanks for making things run so smoothly. -- Ed Leafe From ksnhr.tech at gmail.com Fri Feb 16 04:03:43 2018 From: ksnhr.tech at gmail.com (Kaz Shinohara) Date: Fri, 16 Feb 2018 13:03:43 +0900 Subject: [openstack-dev] [release] Collecting Queens demos In-Reply-To: <652102B1-1F30-4D78-A3A1-D7227D8F9829@openstack.org> References: <652102B1-1F30-4D78-A3A1-D7227D8F9829@openstack.org> Message-ID: Hi Anne, I'm wondering if I can send a demo video for heat-dashboard which is a new feature in Queens. Is there any format of the video ? Regards, Kaz 2018-02-16 8:09 GMT+09:00 Anne Bertucio : > Hi all, > > We’re getting the Queens Release communications ready, and I’ve seen a > handful of video demos and tutorials of new Queens features. We’d like to > compile a list of these to share with the marketing community. If you have a > demo, would you please send a link my way so we can make sure to include it? > > If you don’t have a demo and have the time, I’d encourage you to make one of > a feature you’re really excited about! We’ve heard really positive feedback > about what’s already out there; people love them! > > > Cheers, > Anne Bertucio > OpenStack Foundation > anne at openstack.org | 206-992-7961 > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From no-reply at openstack.org Fri Feb 16 05:47:46 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Fri, 16 Feb 2018 05:47:46 -0000 Subject: [openstack-dev] [glance] glance 16.0.0.0rc2 (queens) Message-ID: Hello everyone, A new release candidate for glance for the end of the Queens cycle is available! 
You can find the source code tarball at: https://tarballs.openstack.org/glance/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/glance/log/?h=stable/queens Release notes for glance can be found at: http://docs.openstack.org/releasenotes/glance/ From yamamoto at midokura.com Fri Feb 16 05:51:58 2018 From: yamamoto at midokura.com (Takashi Yamamoto) Date: Fri, 16 Feb 2018 14:51:58 +0900 Subject: [openstack-dev] [tap-as-a-service] queens Message-ID: hi, 1. i'm going to create a release and stable branch for tap-as-a-service/queens. 2. to clean up the queue before a release, i approved a bunch of patches today without waiting for two +2s. given the current review bandwidth, i guess we should relax the policy. how do you think? 3. what to do for tap-as-a-service-dashboard? kaz? From sean.mcginnis at gmx.com Fri Feb 16 05:59:35 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 15 Feb 2018 23:59:35 -0600 Subject: [openstack-dev] [release] Release countdown for week R-1, February 17 - 23 Message-ID: <20180216055935.GA17570@cooler> Welcome to one of the last countdown emails for Queens! Development Focus ----------------- Teams should be working on release critical bugs in preparation of the final release. Teams attending the PTG should also be preparing for those discussions and capturing information in the etherpads: https://wiki.openstack.org/wiki/PTG/Rocky/Etherpads General Information ------------------- Thursday, February 22 is the deadline for final Queens release candidates. We will then enter a quiet period until we tag the final release on February 28. Actions --------- Watch for any translation patches coming through and merge them quickly. If your project has a stable/queens branch created, please make sure those patches are also merged there. Liaisons for projects with independent deliverables should import the release history by preparing patches to openstack/releases. Projects following the cycle-trailing model should be getting ready for the cycle-trailing RC deadline coming up on March 1. Please drop by #openstack-release with any questions or concerns about the upcoming release. Upcoming Deadlines & Dates -------------------------- Queens Final Release Candidate deadline: February 22 Rocky PTG in Dublin: Week of February 26, 2018 Queens cycle-trailing RC deadline: March 1 -- Sean McGinnis (smcginnis) From no-reply at openstack.org Fri Feb 16 06:12:42 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Fri, 16 Feb 2018 06:12:42 -0000 Subject: [openstack-dev] [nova] nova 17.0.0.0rc2 (queens) Message-ID: Hello everyone, A new release candidate for nova for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/nova/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! 
Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/nova/log/?h=stable/queens Release notes for nova can be found at: http://docs.openstack.org/releasenotes/nova/ From thierry at openstack.org Fri Feb 16 08:31:03 2018 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 16 Feb 2018 09:31:03 +0100 Subject: [openstack-dev] [tc] Technical Committee Status update, February 16th Message-ID: <402c15a5-5fdc-9f44-fa3d-2306fa4bbe59@openstack.org> Hi! This is the weekly summary of Technical Committee initiatives. You can find the full list of all open topics (updated twice a week) at: https://wiki.openstack.org/wiki/Technical_Committee_Tracker If you are working on something (or plan to work on something) governance-related that is not reflected on the tracker yet, please feel free to add to it ! == Recently-approved changes == * Rocky goals: toggle debug at runtime[1], and remove mox[2] * Update goal process to use storyboard for tracking[3] * Dedicate Queens release to Shawn Pearce [4] * Team diversity tags update [5] * Update Rocky PTLs [6] * Goal updates: telemetry, congress * New repo: devstack-plugin-container * Removed repo: trove-integration [1] https://review.openstack.org/#/c/534605/ [2] https://review.openstack.org/#/c/532361/ [3] https://review.openstack.org/#/c/534443/ [4] https://review.openstack.org/#/c/541313/ [5] https://review.openstack.org/#/c/543566/ [6] https://review.openstack.org/#/c/544753/ Busy week for the approval section. We finally selected our two goals for Rocky, as well as updated the process to use StoryBoard for tracking progress. Please see the Rocky goals page at: https://governance.openstack.org/tc/goals/rocky/index.html The Technical Committee also approved a resolution to dedicate the Queens release to Shawn Pearce, creator of Gerrit and prominent git contributor. Please see: https://governance.openstack.org/tc/resolutions/20180206-dedicate-queens-to-shawn-pearce.html == PTG preparation == The Dublin PTG will start in 10 days ! You can see the schedule at: https://www.openstack.org/ptg#tab_schedule We'll also have the following post-lunch presentations: http://lists.openstack.org/pipermail/openstack-dev/2018-February/127316.html If you wonder how the PTGbot will be used to track what's happening and book reservable rooms, please see those two threads: http://lists.openstack.org/pipermail/openstack-dev/2018-February/127413.html http://lists.openstack.org/pipermail/openstack-dev/2018-February/127414.html == Under discussion == Paul Belanger proposed dates and a theme for naming the S release. Go Spandau ! https://review.openstack.org/#/c/545010/ Jim Blair proposed a resolution about CI for external projects. Please see: https://review.openstack.org/#/c/545065/ A new project team was proposed to regroup people working on PowerVM support in OpenStack. It is similar in many ways to the WinStackers team (working on Hyper-V / Windows support). Please comment on the review at: https://review.openstack.org/#/c/540165/ The discussion started by Graham Hayes to clarify how the testing of interoperability programs should be organized in the age of add-on trademark programs is still going on, now on an active mailing-list thread. Please chime in to inform the TC choice: https://review.openstack.org/521602 http://lists.openstack.org/pipermail/openstack-dev/2018-January/126146.html == TC member actions for the coming week(s) == PTG and release preparation should take most of TC members time. 
== Office hours == To be more inclusive of all timezones and more mindful of people for which English is not the primary language, the Technical Committee dropped its dependency on weekly meetings. So that you can still get hold of TC members on IRC, we instituted a series of office hours on #openstack-tc: * 09:00 UTC on Tuesdays * 01:00 UTC on Wednesdays * 15:00 UTC on Thursdays For the coming week, I expect discussions to be around the discussions we need to have at the PTG. Cheers, -- Thierry Carrez (ttx) From bdobreli at redhat.com Fri Feb 16 09:24:01 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Fri, 16 Feb 2018 10:24:01 +0100 Subject: [openstack-dev] [TripleO][CI] Validating HA on upstream In-Reply-To: <3bbeffd7-5950-bd17-d608-c28f96fab779@redhat.com> References: <3bbeffd7-5950-bd17-d608-c28f96fab779@redhat.com> Message-ID: <37fb190d-693a-1749-5e4e-0cfba68466d2@redhat.com> On 2/15/18 8:22 PM, Raoul Scarazzini wrote: > TL;DR: we would like to change the way HA is tested upstream to avoid > being hitten by evitable bugs that the CI process should discover. > > Long version: > > Today HA testing in upstream consist only in verifying that a three > controllers setup comes up correctly and can spawn an instance. That's > something, but it’s far from being enough since we continuously see "day > two" bugs. > We started covering this more than a year ago in internal CI and today > also on rdocloud using a project named tripleo-quickstart-utils [1]. > Apart from his name, the project is not limited to tripleo-quickstart, > it covers three principal roles: > > 1 - stonith-config: a playbook that can be used to automate the creation > of fencing devices in the overcloud; > 2 - instance-ha: a playbook that automates the seventeen manual steps > needed to configure instance HA in the overcloud, test them via rally > and verify that instance HA works; > 3 - validate-ha: a playbook that runs a series of disruptive actions in > the overcloud and verifies it always behaves correctly by deploying a > heat-template that involves all the overcloud components; > > To make this usable upstream, we need to understand where to put this > code. Here some choices: > > 1 - tripleo-validations: the most logical place to put this, at least > looking at the name, would be tripleo-validations. I've talked with some > of the folks working on it, and it came out that the meaning of > tripleo-validations project is not doing disruptive tests. Integrating > this stuff would be out of scope. > > 2 - tripleo-quickstart-extras: apart from the fact that this is not > something meant just for quickstart (the project supports infrared and > "plain" environments as well) even if we initially started there, in the > end, it came out that nobody was looking at the patches since nobody was > able to verify them. The result was a series of reviews stuck forever. > So moving back to extras would be a step backward. > > 3 - Dedicated project (tripleo-ha-utils or just tripleo-utils): like for > tripleo-upgrades or tripleo-validations it would be perfect having all > this grouped and usable as a standalone thing. Any integration is > possible inside the playbook for whatever kind of test. Today we're +1 this looks like a perfect fit. 
Would it be possible to install that tripleo-ha-utils/tripleo-quickstart-utils with ansible-galaxy, alongside the quickstart, then apply destructive-testing playbooks with either the quickstart's static inventory [0] (from your admin/control node) or maybe via dynamic inventory [1] (from undercloud managing the overcloud under test via config-download and/or external ansible deployment mechanisms)? [0] https://git.openstack.org/cgit/openstack/tripleo-quickstart/tree/roles/tripleo-inventory [1] https://git.openstack.org/cgit/openstack/tripleo-validations/tree/scripts/tripleo-ansible-inventory > using the bash framework to interact with the cluster, rally to test > instance-ha and Ansible itself to simulate full power outage scenarios. > > There's been a lot of talk about this during the last PTG [2], and > unfortunately, I'll not be part of the next one, but I would like to see > things moving on this side. > Everything I wrote is of course up to discussion, that's precisely the > meaning of this mail. > > Thanks to all who'll give advice, suggestions, and thoughts about all > this stuff. > > [1] https://github.com/redhat-openstack/tripleo-quickstart-utils > [2] https://etherpad.openstack.org/p/qa-queens-ptg-destructive-testing > -- Best regards, Bogdan Dobrelya, Irc #bogdando From arxcruz at redhat.com Fri Feb 16 09:55:13 2018 From: arxcruz at redhat.com (Arx Cruz) Date: Fri, 16 Feb 2018 10:55:13 +0100 Subject: [openstack-dev] [tripleo] TripleO CI end of sprint status Message-ID: *Hello,On February 14 we came the end of sprint using our new team structure, and here’s the highlights.Sprint Review:On this sprint, was the first one where the team worked in collaboration with another team to have TripleO Upgrade jobs running on RDO cloud on tripleo-quickstart, tripleo-quickstart-extras and tripleo-upgrade projects.One can see the results of the sprint via https://tinyurl.com/y8h8xmo8 Ruck and RoverWhat is Ruck and RoverOne person in our team is designated Ruck and another Rover, one is responsible to monitoring the CI, checking for failures, opening bugs, participate on meetings, and this is your focal point to any CI issues. The other person, is responsible to work on these bugs, fix problems and the rest of the team are focused on the sprint. 
For more information about our structure, check [1]List of bugs that Ruck and Rover were working on: - https://bugs.launchpad.net/tripleo/+bug/1749335 - https://bugs.launchpad.net/tripleo/+bug/1749186 - https://bugs.launchpad.net/tripleo/+bug/1749105 - https://bugs.launchpad.net/tripleo/+bug/1748971 - https://bugs.launchpad.net/tripleo/+bug/1748934 - https://bugs.launchpad.net/tripleo/+bug/1748751 - https://bugs.launchpad.net/tripleo/+bug/1748315 - https://bugs.launchpad.net/tripleo/+bug/1748262 - https://bugs.launchpad.net/tripleo/+bug/1748199 - https://bugs.launchpad.net/tripleo/+bug/1748180 - https://bugs.launchpad.net/tripleo/+bug/1747986 - https://bugs.launchpad.net/tripleo/+bug/1747690 - https://bugs.launchpad.net/tripleo/+bug/1747623 - https://bugs.launchpad.net/tripleo/+bug/1747294 - https://bugs.launchpad.net/tripleo/+bug/1747089 - https://bugs.launchpad.net/tripleo/+bug/1747055 - https://bugs.launchpad.net/tripleo/+bug/1747043 - https://bugs.launchpad.net/tripleo/+bug/1746978 - https://bugs.launchpad.net/tripleo/+bug/1746857 - https://bugs.launchpad.net/tripleo/+bug/1746812 - https://bugs.launchpad.net/tripleo/+bug/1746737 - https://bugs.launchpad.net/tripleo/+bug/1746734 We also have our new Ruck and Rover for this week: - Ruck- Arx Cruz - arxcruz|ruck- Rover- Ronele Landy - rlandy|roverIf you have any questions and/or suggestions, please contact us[1] https://github.com/openstack/tripleo-specs/blob/master/specs/policy/ci-team-structure.rst * -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias.urdin at crystone.com Fri Feb 16 10:43:27 2018 From: tobias.urdin at crystone.com (Tobias Urdin) Date: Fri, 16 Feb 2018 10:43:27 +0000 Subject: [openstack-dev] Debian OpenStack packages switching to Py3 for Queens References: <2916933d-c5be-9301-f8de-e0d380627c54@debian.org> <20180215193845.paju37mbcvizmbpq@gentoo.org> <20180215195047.pxkfktdxkepihvoo@gentoo.org> Message-ID: <1b0d2ccbd7c94da1993672a7fce5bc73@mb01.staff.ognet.se> On 02/15/2018 08:52 PM, Matthew Thode wrote: > On 18-02-15 13:38:45, Matthew Thode wrote: >> On 18-02-15 09:31:19, Thomas Goirand wrote: >>> Hi, >>> >>> Since I'm getting some pressure from other DDs to actively remove Py2 >>> support from my packages, I'm very much considering switching all of the >>> Debian packages for Queens to using exclusively Py3. I would have like >>> to read some opinions about this. Is it a good time for such move? I >>> hope it is, because I'd like to maintain as few Python package with Py2 >>> support at the time of Debian Buster freeze. >>> >>> Also, doing Queens, I've noticed that os-xenapi is still full of py2 >>> only stuff in os_xenapi/dom0. Can we get those fixes? Here's my patch: >>> >>> https://review.openstack.org/544809 >>> >> Gentoo has Openstack packaged for both python2.7 and python3.(5,6) for >> pike. Queens will be the same for us at least, but I haven't had >> problems with at least the core services running them all through >> python3.x. >> > Edit: Everything BUT swift... > Shimming in with more of an operators view; since a lot of services are deployed with wsgi in a webserver, mainly a lot of API services, having a coordinated approach of moving all those services over to py3 at the same time is appreciated. Otherwise a lot of the services cannot be co-located on the same machine and will probably break other projects, for example the Puppet modules. Same goes for testing all those services in a single node CI test. 
From zh.f at outlook.com Fri Feb 16 12:54:41 2018 From: zh.f at outlook.com (Zhang Fan) Date: Fri, 16 Feb 2018 12:54:41 +0000 Subject: [openstack-dev] [trove] PTG planning, weekly meeting for Trove In-Reply-To: References: <6e8813b1-c05b-e729-75dd-7c9863fd0730@catalyst.net.nz> , Message-ID: Thanks for the kind reminder, still enjoy my Chinese New Year holidays, wish you the best luck in the new year :) From Fan’s plastic iPhone 在 2018年2月16日,02:16,Manoj Kumar > 写道: I would encourage everyone who is interested in providing input into the Rocky planning cycle for Trove to put their ideas into the etherpad at: https://etherpad.openstack.org/p/trove-ptg-rocky There are a good number of topics posted already. We would welcome input from operators as well. We are planning to meet remotely using Skype. If you are interested in participating in the discussions do add your Skype ID as well. As we prepare for the PTG, there will be no weekly meeting next week. - Manoj __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Fri Feb 16 13:34:44 2018 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Fri, 16 Feb 2018 08:34:44 -0500 Subject: [openstack-dev] [glance] priorities for the coming week (16 Feb - 21 Feb) Message-ID: Hello Glancers, (1) Queens RC-2 was tagged Thursday (late Thursday night Pacific time, but Thursday nonetheless). A good place to concentrate testing effort is on the URI filtering feature of the web-download import method. Remember that if you build a devstack, set WSGI_MODE=mod_wsgi in your local.conf so that the import task will be processed. (2) We're aiming for RC-3 for Monday. There will be some documentation and releasenote update patches, and patches for the glance-manage tool. (And any image import bugs found during testing.) Keep an eye on the etherpad for stuff to review: https://etherpad.openstack.org/p/glance-queens-rc1-patches (yes, that's "rc1" in the url, I didn't want to waste a perfectly good etherpad) (3) Reminder: Erno will be making final decisions on what will happen for Glance at the PTG on Monday morning. So get your topic proposals and votes for the stuff you want to discuss (or have discussed, if unfortunately you won't be there) on the etherpad before 0900 UTC Monday: https://etherpad.openstack.org/p/glance-rocky-ptg-planning cheers, brian From cdent+os at anticdent.org Fri Feb 16 13:54:21 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 16 Feb 2018 13:54:21 +0000 (GMT) Subject: [openstack-dev] [nova] [placement] resource providers update 18-07 Message-ID: Resource provider update 18-07. This will be the last one before the PTG and there won't be one during the PTG, so the next one will be 18-10 or later. Before I get to the meat of this week's report, I'd like to request some feedback from readers on how to improve the report. Over its lifetime it has grown and it has now reached the point that while it tries to give the impression of being complete, it never actually is, and is a fair chunk of work to get that way. So perhaps there is a way to make it a bit more focused and thus bit more actionable. If there are parts you can live without or parts you can't live without, please let me know. 
One idea I've had is to do some kind of automation to make it what amounts to a dashboard, but I'm not super inclined to do that because the human curation has been useful for me. If it's not useful for anyone else, however, then that's something to consider. If, at the PTG, we decide to start making incremental progress on extracting placement to its own thing, I'll probably add a section on this related to work on that. I've been doing a lot of spikes to see where some of the issues are and experiment with solutions. Those need feedback to decide if the direction has promise or creates problems. Okay, with that out of the way. # Most Important RC2 was cut last night. Bug triage and fixing is important. There's been a lot of interesting specs started recently. Part of this is the result of various parties moving their deployments forward (not just to queens) and real issues with placement (and friends) being exposed. See the specs section for some links to ones that are pending. A few have already merged but for sake of visibility: * Add placement-req-filter spec https://review.openstack.org/#/c/544585/ * Support member_of param for allocation candidates https://review.openstack.org/#/c/544694/ PTG planning screams along on etherpads, agenda and retrospective: * https://etherpad.openstack.org/p/nova-ptg-rocky * https://etherpad.openstack.org/p/nova-queens-retrospective # Bugs: * Placement related bugs without owners: https://goo.gl/TgiPXb * In progress placement bugs: https://goo.gl/vzGGDQ # Specs * Support traits in Glance https://review.openstack.org/#/c/541507/4 * Update ProviderTree https://review.openstack.org/#/c/540111/ * Support aggregate affinity filter/weighers https://review.openstack.org/#/c/529135/ (Note that this is not placement aggregates and is not a placement-oriented solution but is something many of the same people are into.) 
* Report CPU features to placement https://review.openstack.org/#/c/497733/ * Account for host agg allocation ratio in placement https://review.openstack.org/#/c/544683/ * mirror nova host aggregates to placement API https://review.openstack.org/#/c/545057/ * Network bandwidth resource provider https://review.openstack.org/#/c/502306/ # Main Themes We're between themes at the moment so I'll just put everything into other today: # Other * Nested resource providers https://review.openstack.org/#/q/status:open+topic:bp/nested-resource-providers * Update references to OSC in old rp specs https://review.openstack.org/#/c/539038/ * [Placement] Invalid query parameter could lead to HTTP 500 https://review.openstack.org/#/c/539408/ * [placement] use simple FaultWrapper https://review.openstack.org/#/c/533752/ * WIP: Move resource provider objects https://review.openstack.org/#/c/540049/ * Do not normalize allocation ratios https://review.openstack.org/#/c/532924/ * Sending global request ids from nova to placement https://review.openstack.org/#/q/topic:bug/1734625 * Add functional test for two-cell scheduler behaviors https://review.openstack.org/#/c/452006/ (This is old and maybe out of date, but something we might like to resurrect) * Make API history doc consistent https://review.openstack.org/#/c/477478/ * WIP: General policy sample file for placement https://review.openstack.org/#/c/524425/ * Support relay RP for allocation candidates https://review.openstack.org/#/c/533437/ Bug fix for sharing with multiple providers * Convert driver supported capabilities to compute node provider traits https://review.openstack.org/#/c/538498/ * Update resources once in update available resources https://review.openstack.org/#/c/520024/ (This ought, when it works, to help address some redunancy concerns with nova making too many requests to placement) * Support aggregate affinity filters/weighers https://review.openstack.org/#/q/topic:bp/aggregate-affinity A rocky targeted improvement to affinity handling * Improved functional test coverage for placement https://review.openstack.org/#/q/topic:bp/placement-test-enhancement * Functional tests for traits api https://review.openstack.org/#/c/524094/ * WIP: SchedulerReportClient.set_aggregates_for_provider https://review.openstack.org/#/c/532995/ This is for rocky as it depends on changing the api for aggregates handling on the placement side to accept and provide a generation * Check for leaked allocations in post_test_hook https://review.openstack.org/#/c/538510/ # End Hi. -- Chris Dent (⊙_⊙') https://anticdent.org/ freenode: cdent tw: @anticdent From rasca at redhat.com Fri Feb 16 13:59:40 2018 From: rasca at redhat.com (Raoul Scarazzini) Date: Fri, 16 Feb 2018 14:59:40 +0100 Subject: [openstack-dev] [TripleO][CI][QA] Validating HA on upstream In-Reply-To: <37fb190d-693a-1749-5e4e-0cfba68466d2@redhat.com> References: <3bbeffd7-5950-bd17-d608-c28f96fab779@redhat.com> <37fb190d-693a-1749-5e4e-0cfba68466d2@redhat.com> Message-ID: On 16/02/2018 10:24, Bogdan Dobrelya wrote: [...] > +1 this looks like a perfect fit. Would it be possible to install that > tripleo-ha-utils/tripleo-quickstart-utils with ansible-galaxy, alongside > the quickstart, then apply destructive-testing playbooks with either the > quickstart's static inventory [0] (from your admin/control node) or > maybe via dynamic inventory [1] (from undercloud managing the overcloud > under test via config-download and/or external ansible deployment > mechanisms)? 
> [0] > https://git.openstack.org/cgit/openstack/tripleo-quickstart/tree/roles/tripleo-inventory > [1] > https://git.openstack.org/cgit/openstack/tripleo-validations/tree/scripts/tripleo-ansible-inventory Hi Bogdan, thanks for your answer. On the inventory side of things these playbooks work on any kind of inventory, we're using it at the moment with both manual and quickstart generated environments, or even infrared ones. We're able to do it at the same time the environment gets deployed or in a second time like a day two action. What is not clear to me is the ansible-galaxy part you're mentioning, today we rely on the github.com/redhat-openstack git repo, so we clone it and then launch the playbooks via ansible-playbook command, how do you see ansible-galaxy into the picture? Thanks! -- Raoul Scarazzini rasca at redhat.com From doug at doughellmann.com Fri Feb 16 14:02:17 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 16 Feb 2018 09:02:17 -0500 Subject: [openstack-dev] [QA][all] Migration of Tempest / Grenade jobs to Zuul v3 native In-Reply-To: References: Message-ID: <1518789681-sup-6176@lrrr.local> Excerpts from Andrea Frittoli's message of 2018-02-15 23:31:02 +0000: > Dear all, > > this is the first or a series of ~regular updates on the migration of > Tempest / Grenade jobs to Zuul v3 native. > > The QA team together with the infra team are working on providing the > OpenStack community with a set of base Tempest / Grenade jobs that can be > used as a basis to write new CI jobs / migrate existing legacy ones with a > minimal effort and very little or no Ansible knowledge as a precondition. > > The effort is tracked in an etherpad [0]; I'm trying to keep the > etherpad up to date but it may not always be a source of truth. > > Useful jobs available so far: > - devstack-tempest [0] is a simple tempest/devstack job that runs keystone > glance nova cinder neutron swift and tempest *smoke* filter > - tempest-full [1] is similar but runs a full test run - it replaces the > legacy tempest-dsvm-neutron-full from the integrated gate > - tempest-full-py3 [2] runs a full test run on python3 - it replaces the > legacy tempest-dsvm-py35 > > Both tempest-full and tempest-full-py3 are part of integrated-gate > templates, starting from stable/queens on. > The other stable branches still run the legacy jobs, since devstack ansible > changes have not been backported (yet). If we do backport it will be up to > pike maximum. > > Those jobs work in single node mode only at the moment. Enabling multinode > via job configuration only require a new Zuul feature [4][5] that should be > available soon; the new feature allows defining host/group variables in the > job definition, which means setting variables which are specific to one > host or a group of hosts. > Multinode DVR and Ironic jobs will require migration of the ovs-* roles > form devstack-gate to devstack as well. > > Grenade jobs (single and multinode) are still legacy, even if the *legacy* > word has been removed from the name. > They are currently temporarily hosted in the neutron repository. They are > going to be implemented as Zuul v3 native in the grenade repository. > > Roles are documented, and a couple of migration tips for DEVSTACK_GATE > flags is available in the etherpad [0]; more comprehensive examples / > docs will be available as soon as possible. > > Please let me know if you find this update useful and / or if you would > like to see different information in it. 
> I will send further updates as soon as significant changes / new features > become available. > > Andrea Frittoli (andreaf) > > [0] https://etherpad.openstack.org/p/zuulv3-native-devstack-tempest-jobs > [1] http://git.openstack.org/cgit/openstack/tempest/tree/.zuul.yaml#n1 > [2] http://git.openstack.org/cgit/openstack/tempest/tree/.zuul.yaml#n29 > [3] http://git.openstack.org/cgit/openstack/tempest/tree/.zuul.yaml#n47 > [4] https://etherpad.openstack.org/p/zuulv3-group-variables > [5] https://review.openstack.org/#/c/544562/ Thanks for this post, Andrea. I know the QA & Infra teams have been doing a lot of work to complete the migration and improve our CI systems and I look forward to being able to track the work via future update emails. Doug From balazs.gibizer at ericsson.com Fri Feb 16 14:11:09 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Fri, 16 Feb 2018 15:11:09 +0100 Subject: [openstack-dev] [nova] signing up as a bug tag owner Message-ID: <1518790269.19368.2@smtp.office365.com> Hi, On the weekly meeting melwitt suggested [1] to have people signed up for certain bug tags. I've already been trying to follow the bugs tagged with the 'notifications' tag so I sign up for this tag. Cheers, gibi [1]http://eavesdrop.openstack.org/meetings/nova/2018/nova.2018-02-15-21.01.log.html#l-86 From bdobreli at redhat.com Fri Feb 16 14:16:14 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Fri, 16 Feb 2018 15:16:14 +0100 Subject: [openstack-dev] [TripleO][CI][QA] Validating HA on upstream In-Reply-To: References: <3bbeffd7-5950-bd17-d608-c28f96fab779@redhat.com> <37fb190d-693a-1749-5e4e-0cfba68466d2@redhat.com> Message-ID: <07c67333-272c-d652-65aa-d5bd6cd59809@redhat.com> On 2/16/18 2:59 PM, Raoul Scarazzini wrote: > On 16/02/2018 10:24, Bogdan Dobrelya wrote: > [...] >> +1 this looks like a perfect fit. Would it be possible to install that >> tripleo-ha-utils/tripleo-quickstart-utils with ansible-galaxy, alongside >> the quickstart, then apply destructive-testing playbooks with either the >> quickstart's static inventory [0] (from your admin/control node) or >> maybe via dynamic inventory [1] (from undercloud managing the overcloud >> under test via config-download and/or external ansible deployment >> mechanisms)? >> [0] >> https://git.openstack.org/cgit/openstack/tripleo-quickstart/tree/roles/tripleo-inventory >> [1] >> https://git.openstack.org/cgit/openstack/tripleo-validations/tree/scripts/tripleo-ansible-inventory > > Hi Bogdan, > thanks for your answer. On the inventory side of things these playbooks > work on any kind of inventory, we're using it at the moment with both > manual and quickstart generated environments, or even infrared ones. > We're able to do it at the same time the environment gets deployed or in > a second time like a day two action. > What is not clear to me is the ansible-galaxy part you're mentioning, > today we rely on the github.com/redhat-openstack git repo, so we clone > it and then launch the playbooks via ansible-playbook command, how do > you see ansible-galaxy into the picture? Git clone just works as well... Though, I was thinking of some minimal integration via *playbooks* (not roles) in quickstart/tripleo-validations and *external* roles. So the in-repo playbooks will be referencing those external destructive testing roles. 
While the roles are installed with galaxy, like: $ ansible-galaxy install git+https://$repo_name,master -p $external_roles_path or prolly adding the $repo_name and $release (master or a tag) into some galaxy-requirements.yaml file and install from it: $ ansible-galaxy install --force -r quickstart-extras/playbooks/external/galaxy-requirements.yaml -p $external_roles_path Then invoked for quickstart-extras/tripleo-validations like: $ ansible-playbook -i inventory quickstart-extras/playbooks/external/destructive-tests.yaml > > Thanks! > -- Best regards, Bogdan Dobrelya, Irc #bogdando From whayutin at redhat.com Fri Feb 16 14:41:57 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Fri, 16 Feb 2018 09:41:57 -0500 Subject: [openstack-dev] [TripleO][CI][QA] Validating HA on upstream In-Reply-To: <07c67333-272c-d652-65aa-d5bd6cd59809@redhat.com> References: <3bbeffd7-5950-bd17-d608-c28f96fab779@redhat.com> <37fb190d-693a-1749-5e4e-0cfba68466d2@redhat.com> <07c67333-272c-d652-65aa-d5bd6cd59809@redhat.com> Message-ID: On Fri, Feb 16, 2018 at 9:16 AM, Bogdan Dobrelya wrote: > On 2/16/18 2:59 PM, Raoul Scarazzini wrote: > >> On 16/02/2018 10:24, Bogdan Dobrelya wrote: >> [...] >> >>> +1 this looks like a perfect fit. Would it be possible to install that >>> tripleo-ha-utils/tripleo-quickstart-utils with ansible-galaxy, alongside >>> the quickstart, then apply destructive-testing playbooks with either the >>> quickstart's static inventory [0] (from your admin/control node) or >>> maybe via dynamic inventory [1] (from undercloud managing the overcloud >>> under test via config-download and/or external ansible deployment >>> mechanisms)? >>> [0] >>> https://git.openstack.org/cgit/openstack/tripleo-quickstart/ >>> tree/roles/tripleo-inventory >>> [1] >>> https://git.openstack.org/cgit/openstack/tripleo-validations >>> /tree/scripts/tripleo-ansible-inventory >>> >> >> Hi Bogdan, >> thanks for your answer. On the inventory side of things these playbooks >> work on any kind of inventory, we're using it at the moment with both >> manual and quickstart generated environments, or even infrared ones. >> We're able to do it at the same time the environment gets deployed or in >> a second time like a day two action. >> What is not clear to me is the ansible-galaxy part you're mentioning, >> today we rely on the github.com/redhat-openstack git repo, so we clone >> it and then launch the playbooks via ansible-playbook command, how do >> you see ansible-galaxy into the picture? >> > > Git clone just works as well... Though, I was thinking of some minimal > integration via *playbooks* (not roles) in quickstart/tripleo-validations > and *external* roles. So the in-repo playbooks will be referencing those > external destructive testing roles. While the roles are installed with > galaxy, like: > > $ ansible-galaxy install git+https://$repo_name,master -p > $external_roles_path > > or prolly adding the $repo_name and $release (master or a tag) into some > galaxy-requirements.yaml file and install from it: > > $ ansible-galaxy install --force -r quickstart-extras/playbooks/ex > ternal/galaxy-requirements.yaml -p $external_roles_path > > Then invoked for quickstart-extras/tripleo-validations like: > > $ ansible-playbook -i inventory quickstart-extras/playbooks/ex > ternal/destructive-tests.yaml > > >> Thanks! >> >> Using galaxy is an option however we would need to make sure that galaxy is proxied across the upstream clouds. 
Another option would be to follow the current established pattern of adding it to the requirements file [1] Thanks Bogdan, Raoul! [1] https://github.com/openstack/tripleo-quickstart/blob/master/quickstart-extras-requirements.txt > > -- > Best regards, > Bogdan Dobrelya, > Irc #bogdando > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alee at redhat.com Fri Feb 16 14:42:10 2018 From: alee at redhat.com (Ade Lee) Date: Fri, 16 Feb 2018 09:42:10 -0500 Subject: [openstack-dev] [barbican] weekly meeting time In-Reply-To: <005101d3a55a$e6329270$b297b750$@gohighsec.com> References: <005101d3a55a$e6329270$b297b750$@gohighsec.com> Message-ID: <1518792130.19501.1.camel@redhat.com> Thanks Jiong, Preference noted. Anyone else want to make the meeting time switch? (Or prefer not to). Ade On Wed, 2018-02-14 at 14:13 +0800, Jiong Liu wrote: > Hi Ade, > > Thank you for proposing this change! > I'm in China, and the second time slot works better for me. > > Regards, > Jiong > > > Message: 35 > > Date: Tue, 13 Feb 2018 10:17:59 -0500 > > From: Ade Lee > > To: "OpenStack Development Mailing List (not for usage questions)" > > > > Subject: [openstack-dev] [barbican] weekly meeting time > > Message-ID: <1518535079.22990.9.camel at redhat.com> > > Content-Type: text/plain; charset="UTF-8" > > Hi all, > > The Barbican weekly meeting has been fairly sparsely attended for a > > little while now, and the most active contributors these days > > appear to > > be in Asia. > > Its time to consider moving the weekly meeting to a time when more > > contributors can attend. I'm going to propose a couple times below > > to > > start out. > > 2 am UTC Tuesday == 9 pm EST Monday == 10 am CST (China) Tuesday > > 3 am UTC Tuesday == 10 pm EST Monday == 11 am CST (China) Tuesday > > Feel free to propose other days/times. > > Thanks, > > Ade > > P.S. Until decided otherwise, the Barbican meeting remains on > > Mondays > > at 2000 UTC > > > > _____________________________________________________________________ > _____ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubs > cribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From pkovar at redhat.com Fri Feb 16 14:42:16 2018 From: pkovar at redhat.com (Petr Kovar) Date: Fri, 16 Feb 2018 15:42:16 +0100 Subject: [openstack-dev] Debian OpenStack packages switching to Py3 for Queens In-Reply-To: <2916933d-c5be-9301-f8de-e0d380627c54@debian.org> References: <2916933d-c5be-9301-f8de-e0d380627c54@debian.org> Message-ID: <20180216154216.06843030ae43a131185b875e@redhat.com> On Thu, 15 Feb 2018 09:31:19 +0100 Thomas Goirand wrote: > Hi, > > Since I'm getting some pressure from other DDs to actively remove Py2 > support from my packages, I'm very much considering switching all of the > Debian packages for Queens to using exclusively Py3. I would have like > to read some opinions about this. Is it a good time for such move? I > hope it is, because I'd like to maintain as few Python package with Py2 > support at the time of Debian Buster freeze. > > Also, doing Queens, I've noticed that os-xenapi is still full of py2 > only stuff in os_xenapi/dom0. 
> Can we get those fixes? Here's my patch:
>
> https://review.openstack.org/544809

Hey Thomas, slightly off-topic to this, but would it be a good idea to resurrect OpenStack install guides for Debian if Debian packages are still maintained?

Thanks for working on Debian packages.

Cheers,
pk

From liam.young at canonical.com  Fri Feb 16 15:04:45 2018
From: liam.young at canonical.com (Liam Young)
Date: Fri, 16 Feb 2018 15:04:45 +0000
Subject: [openstack-dev] [charms]
Message-ID:

Hi,

I was recently looking at how to support custom configuration that relies on post deployment setup. Specifically, how to support optional designate configuration for the sink service. The configuration lives on the application units but needs the domain id of the designate domain that the records should be created in. This domain is created post-deployment and, obviously, the uuid of the domain will change on each deployment.

I would like to propose doing this through post deployment actions, and I think the general approach will be useful across multiple charms. The charm can have pre-defined custom config which can be enabled through an action. The action parameters also provide an additional context for rendering the template which includes data from the post deployment setup. This approach does not allow arbitrary config to be injected; instead it allows predefined config to be activated via actions.

To illustrate the approach I'll stick with the designate example:

1) Cloud deployed and administrator sets up a new domain.
2) Administrator runs a new add-sink-config action and passes the domain-id and sink config file name.
3) The lead unit updates a map in the leader db which lists additional config files and corresponding context derived from the action options. Each set of config stores its options in its own namespace.
4) The lead unit then triggers config to be rerendered locally.
5) Non-lead units are triggered by the leader-settings-changed hook and also rerender their configs.

Here is a prototype for the designate charm: https://goo.gl/CJj2Rh

Any thoughts, objections, or you-haven't-thought-of-this comments gratefully received.

Liam

From rasca at redhat.com  Fri Feb 16 15:12:38 2018
From: rasca at redhat.com (Raoul Scarazzini)
Date: Fri, 16 Feb 2018 16:12:38 +0100
Subject: [openstack-dev] [TripleO][CI][QA] Validating HA on upstream
In-Reply-To:
References: <3bbeffd7-5950-bd17-d608-c28f96fab779@redhat.com> <37fb190d-693a-1749-5e4e-0cfba68466d2@redhat.com> <07c67333-272c-d652-65aa-d5bd6cd59809@redhat.com>
Message-ID: <31269a1f-deee-071a-a247-f155404ce83a@redhat.com>

On 16/02/2018 15:41, Wesley Hayutin wrote:
[...]
> Using galaxy is an option however we would need to make sure that galaxy
> is proxied across the upstream clouds.
> Another option would be to follow the current established pattern of
> adding it to the requirements file [1]
> Thanks Bogdan, Raoul!
> [1] https://github.com/openstack/tripleo-quickstart/blob/master/quickstart-extras-requirements.txt

This is how we're using it today in the internal pipelines, so once we have tripleo-ha-utils (or whatever it will be called) it will just be a matter of adding it to the file. In the end I think that once the project is created, either way of using it will be fine.

Thanks for your involvement on this, folks!
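For illustration, the addition being discussed would presumably be a single pip-style line in quickstart-extras-requirements.txt, along the lines of the following sketch (the repository name and URL here are only placeholders, since the project has not been created or named yet):

    # hypothetical entry; final repo name still to be decided
    git+https://git.openstack.org/openstack/tripleo-ha-utils.git#egg=tripleo-ha-utils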
-- Raoul Scarazzini rasca at redhat.com From mihaela.balas at orange.com Fri Feb 16 15:40:49 2018 From: mihaela.balas at orange.com (mihaela.balas at orange.com) Date: Fri, 16 Feb 2018 15:40:49 +0000 Subject: [openstack-dev] [Barbican] Keystone Listener error when processing delete project event Message-ID: <553_1518795653_5A86FB85_553_460_1_849F1D1DBD4A00479343403412AE4F8201AB125DE1@ESSEN.office.orange.intra> Hello, The Keystone Listener outputs the below error, over and over again, when processing a delete project event. Do you have any idea why this happens? Happens the same with Ocata and Pike versions. Thank you, Mihaela Balas 2018-02-16 15:36:02.673 1 DEBUG amqp [-] heartbeat_tick : for connection ef42486446c34306bd10921b264da26b heartbeat_tick /opt/barbican/lib/python2.7/site-packages/amqp/connection.py:678 2018-02-16 15:36:02.673 1 DEBUG amqp [-] heartbeat_tick : Prev sent/recv: 111624/334860, now - 111625/334860, monotonic - 895085.445269, last_heartbeat_sent - 895085.445263, heartbeat int. - 60 for connection ef42486446c34306bd10921b264da26b heartbeat_tick /opt/barbican/lib/python2.7/site-packages/amqp/connection.py:700 2018-02-16 15:36:02.675 1 DEBUG oslo_messaging._drivers.amqpdriver [-] received message with unique_id: 0a407a9a71b641c888c49c0d4674b607 __call__ /opt/barbican/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py:257 2018-02-16 15:36:02.675 1 DEBUG amqp [-] heartbeat_tick : for connection ef42486446c34306bd10921b264da26b heartbeat_tick /opt/barbican/lib/python2.7/site-packages/amqp/connection.py:678 2018-02-16 15:36:02.675 1 DEBUG amqp [-] heartbeat_tick : Prev sent/recv: 111625/334860, now - 111625/334863, monotonic - 895085.447218, last_heartbeat_sent - 895085.445263, heartbeat int. - 60 for connection ef42486446c34306bd10921b264da26b heartbeat_tick /opt/barbican/lib/python2.7/site-packages/amqp/connection.py:700 2018-02-16 15:36:02.676 1 DEBUG barbican.queue.keystone_listener [-] Input keystone event publisher_id = identity.keystone-admin-api-2903979735-fsj57 process_event /opt/barbican/lib/python2.7/site-packages/barbican/queue/keystone_listener.py:72 2018-02-16 15:36:02.676 1 DEBUG barbican.queue.keystone_listener [-] Input keystone event payload = {u'resource_info': u'79d3491d58e542ada54776d2bd68ef7e'} process_event /opt/barbican/lib/python2.7/site-packages/barbican/queue/keystone_listener.py:73 2018-02-16 15:36:02.676 1 DEBUG barbican.queue.keystone_listener [-] Input keystone event type = identity.project.deleted process_event /opt/barbican/lib/python2.7/site-packages/barbican/queue/keystone_listener.py:74 2018-02-16 15:36:02.677 1 DEBUG barbican.queue.keystone_listener [-] Input keystone event metadata = {'timestamp': u'2018-02-16 15:35:48.506374', 'message_id': u'5cc2ef82-75a7-4ce9-a9eb-573ae008f4e4'} process_event /opt/barbican/lib/python2.7/site-packages/barbican/queue/keystone_listener.py:75 2018-02-16 15:36:02.677 1 DEBUG barbican.queue.keystone_listener [-] Keystone Event: resource type=project, operation type=deleted, keystone id=79d3491d58e542ada54776d2bd68ef7e process_event /opt/barbican/lib/python2.7/site-packages/barbican/queue/keystone_listener.py:80 2018-02-16 15:36:02.677 1 DEBUG barbican.tasks.keystone_consumer [-] Creating KeystoneEventConsumer task processor __init__ /opt/barbican/lib/python2.7/site-packages/barbican/tasks/keystone_consumer.py:40 2018-02-16 15:36:02.677 1 DEBUG barbican.model.repositories [-] Getting session... 
get_session /opt/barbican/lib/python2.7/site-packages/barbican/model/repositories.py:353 2018-02-16 15:36:02.677 1 ERROR barbican.tasks.resources [-] Could not retrieve information needed to process task 'Project cleanup via Keystone notifications'.: TypeError: 'NoneType' object is not callable 2018-02-16 15:36:02.677 1 ERROR barbican.tasks.resources Traceback (most recent call last): 2018-02-16 15:36:02.677 1 ERROR barbican.tasks.resources File "/opt/barbican/lib/python2.7/site-packages/barbican/tasks/resources.py", line 91, in process 2018-02-16 15:36:02.677 1 ERROR barbican.tasks.resources entity = self.retrieve_entity(*args, **kwargs) 2018-02-16 15:36:02.677 1 ERROR barbican.tasks.resources File "/opt/barbican/lib/python2.7/site-packages/barbican/tasks/keystone_consumer.py", line 67, in retrieve_entity 2018-02-16 15:36:02.677 1 ERROR barbican.tasks.resources suppress_exception=True) 2018-02-16 15:36:02.677 1 ERROR barbican.tasks.resources File "/opt/barbican/lib/python2.7/site-packages/barbican/model/repositories.py", line 586, in find_by_external_project_id 2018-02-16 15:36:02.677 1 ERROR barbican.tasks.resources session = self.get_session(session) 2018-02-16 15:36:02.677 1 ERROR barbican.tasks.resources File "/opt/barbican/lib/python2.7/site-packages/barbican/model/repositories.py", line 354, in get_session 2018-02-16 15:36:02.677 1 ERROR barbican.tasks.resources return session or get_session() 2018-02-16 15:36:02.677 1 ERROR barbican.tasks.resources File "/opt/barbican/lib/python2.7/site-packages/barbican/model/repositories.py", line 161, in get_session 2018-02-16 15:36:02.677 1 ERROR barbican.tasks.resources return _SESSION_FACTORY() 2018-02-16 15:36:02.677 1 ERROR barbican.tasks.resources TypeError: 'NoneType' object is not callable 2018-02-16 15:36:02.677 1 ERROR barbican.tasks.resources 2018-02-16 15:36:02.678 1 DEBUG oslo.messaging._drivers.impl_rabbit [-] Timed out waiting for RPC response: timed out _raise_timeout /opt/barbican/lib/python2.7/site-packages/oslo_messaging/_drivers/impl_rabbit.py:1037 2018-02-16 15:36:02.678 1 DEBUG oslo.messaging._drivers.impl_rabbit [-] Timed out waiting for RPC response: Timeout while waiting on RPC response - topic: "", RPC method: "" info: "" _raise_timeout /opt/barbican/lib/python2.7/site-packages/oslo_messaging/_drivers/impl_rabbit.py:1037 2018-02-16 15:36:02.679 1 DEBUG amqp [-] heartbeat_tick : for connection ef42486446c34306bd10921b264da26b heartbeat_tick /opt/barbican/lib/python2.7/site-packages/amqp/connection.py:678 2018-02-16 15:36:02.679 1 DEBUG amqp [-] heartbeat_tick : Prev sent/recv: 111625/334863, now - 111626/334863, monotonic - 895085.450656, last_heartbeat_sent - 895085.450651, heartbeat int. - 60 for connection ef42486446c34306bd10921b264da26b heartbeat_tick /opt/barbican/lib/python2.7/site-packages/amqp/connection.py:700 2018-02-16 15:36:02.680 1 DEBUG oslo_messaging._drivers.amqpdriver [-] received message with unique_id: 0a407a9a71b641c888c49c0d4674b607 __call__ /opt/barbican/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py:257 2018-02-16 15:36:02.681 1 DEBUG amqp [-] heartbeat_tick : for connection ef42486446c34306bd10921b264da26b heartbeat_tick /opt/barbican/lib/python2.7/site-packages/amqp/connection.py:678 2018-02-16 15:36:02.681 1 DEBUG amqp [-] heartbeat_tick : Prev sent/recv: 111626/334863, now - 111626/334866, monotonic - 895085.452749, last_heartbeat_sent - 895085.450651, heartbeat int. 
- 60 for connection ef42486446c34306bd10921b264da26b heartbeat_tick /opt/barbican/lib/python2.7/site-packages/amqp/connection.py:700
[...the same heartbeat_tick / keystone event / ERROR "TypeError: 'NoneType' object is not callable" traceback cycle repeats for each redelivery of the identity.project.deleted event; the duplicated log blocks are trimmed here...]
_________________________________________________________________________________________________________________________ Ce message et ses pieces jointes peuvent contenir des informations confidentielles ou privilegiees et ne doivent donc pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce message par erreur, veuillez le signaler a l'expediteur et le detruire ainsi que les pieces jointes. Les messages electroniques etant susceptibles d'alteration, Orange decline toute responsabilite si ce message a ete altere, deforme ou falsifie. Merci. This message and its attachments may contain confidential or privileged information that may be protected by law; they should not be distributed, used or copied without authorisation. If you have received this email in error, please notify the sender and delete this message and its attachments. As emails may be altered, Orange is not liable for messages that have been modified, changed or falsified. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From eumel at arcor.de Fri Feb 16 15:59:13 2018 From: eumel at arcor.de (Frank Kloeker) Date: Fri, 16 Feb 2018 16:59:13 +0100 Subject: [openstack-dev] [Election] Process Tweaks In-Reply-To: <20180215220735.jb2bd37ghcztgbtf@yuggoth.org> References: <9c8423d191167e4c9811f2740f6a3b2b@arcor.de> <20180215220735.jb2bd37ghcztgbtf@yuggoth.org> Message-ID: <9d4a2e9ab9c07eb6293fc1fb41c55f90@arcor.de> Am 2018-02-15 23:07, schrieb Jeremy Stanley: > On 2018-02-15 22:24:31 +0100 (+0100), Frank Kloeker wrote: > [...] >> There is one task with validation openstackid, which validated the >> given >> email address. Problem is here, translators using different email >> addresses >> for Zanata and it's not possible to validate the user with his name. >> Difficult. > [...] > > As long as the address you get from Zanata appears in at least one > of the E-mail address fields of the contributor's foundation > individual member profile, then the foundation member lookup API > should be able to locate the correct record for validation. In that > regard, it shouldn't be any more of a problem than it is for code > contributors (where at least one of the addresses for their Gerrit > account needs to appear in at least one of the E-mail address fields > for their member profile). Hi fungi, there is no connection between user data in Zanata and user data in the openstackid database. This means I can choose as email address whatever I want in Zanata. Only my openstackid identifier will be stored there. We can advise the user, to configure this email address in the foundation member profile. The same is in stackalytics like there: https://raw.githubusercontent.com/openstack/stackalytics/master/etc/default_data.json kind regards Frank From anne at openstack.org Fri Feb 16 16:23:24 2018 From: anne at openstack.org (Anne Bertucio) Date: Fri, 16 Feb 2018 08:23:24 -0800 Subject: [openstack-dev] [release] Collecting Queens demos In-Reply-To: References: <652102B1-1F30-4D78-A3A1-D7227D8F9829@openstack.org> Message-ID: Hi Kaz, Format is your choice, but we weren’t planning to host demos, just aggregate the links in a single place for readers, so you’ll want to upload to youtube/vimeo/etc and then send. Cheers, Anne Bertucio Marketing and Certification, OpenStack Foundation anne at openstack.org | 206-992-7961 > On Feb 15, 2018, at 8:03 PM, Kaz Shinohara wrote: > > Hi Anne, > > > I'm wondering if I can send a demo video for heat-dashboard which is a > new feature in Queens. 
> Is there any format of the video ?
>
> Regards,
> Kaz
>
> 2018-02-16 8:09 GMT+09:00 Anne Bertucio :
>> Hi all,
>>
>> We’re getting the Queens Release communications ready, and I’ve seen a
>> handful of video demos and tutorials of new Queens features. We’d like to
>> compile a list of these to share with the marketing community. If you have a
>> demo, would you please send a link my way so we can make sure to include it?
>>
>> If you don’t have a demo and have the time, I’d encourage you to make one of
>> a feature you’re really excited about! We’ve heard really positive feedback
>> about what’s already out there; people love them!
>>
>> Cheers,
>> Anne Bertucio
>> OpenStack Foundation
>> anne at openstack.org | 206-992-7961
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pete.vandergiessen at canonical.com  Fri Feb 16 16:34:21 2018
From: pete.vandergiessen at canonical.com (Pete Vander Giessen)
Date: Fri, 16 Feb 2018 16:34:21 +0000
Subject: [openstack-dev] [charms] Incorrect Padding for SSL Cert/Key
In-Reply-To:
References:
Message-ID:

Hi All,

I came across this thread when troubleshooting a similar problem, and wanted to drop in the solution we came up with for posterity:

1) If you're dealing with an API, and the API comes back with an "incorrect padding" error while parsing an SSL Cert, it usually means that the formatting got munged somewhere. With most of the openstack charms, when specifying an ssl cert in a bundle, you actually need to embed a yaml escaped string inside of your yaml escaped string. It looks something like this:

ssl_cert: |
  |
  your properly formatted ssl cert goes here.

Note that there are two pipes indicating the beginning of a yaml string in the above config setup. You need them both! (Double escaping a big text blob containing special characters is a really common pattern in a lot of APIs -- you generally want to be aware of it, and watch out for it.)

2) For haproxy, you need to specify a service that listens on port 443 in the "services" config key. By default, haproxy will only set up a service listening on port 80. As Adam Collard mentioned, there are some great examples in the haproxy tests: `tests/12_deploy_{trusty,xenial}.py`

~ PeteVG

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From fungi at yuggoth.org  Fri Feb 16 16:36:33 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Fri, 16 Feb 2018 16:36:33 +0000
Subject: [openstack-dev] [Election] Process Tweaks
In-Reply-To: <9d4a2e9ab9c07eb6293fc1fb41c55f90@arcor.de>
References: <9c8423d191167e4c9811f2740f6a3b2b@arcor.de> <20180215220735.jb2bd37ghcztgbtf@yuggoth.org> <9d4a2e9ab9c07eb6293fc1fb41c55f90@arcor.de>
Message-ID: <20180216163632.urq5gqittjmhjwgm@yuggoth.org>

On 2018-02-16 16:59:13 +0100 (+0100), Frank Kloeker wrote:
[...]
> there is no connection between user data in Zanata and user data in the
> openstackid database.
This means I can choose as email address whatever I > want in Zanata. Only my openstackid identifier will be stored there. We can > advise the user, to configure this email address in the foundation member > profile. [...] This is exactly identical to the situation we already have with Gerrit and code contributors. There is no connection between user data in Gerrit and user data in the foundation member database (it's not really the OpenStackID database, OpenStackID is just one component there). A user can choose to enter any E-mail address they like in Gerrit, and we can only advise them to configure their foundation member profile so that it includes that address. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From zigo at debian.org Fri Feb 16 16:49:37 2018 From: zigo at debian.org (Thomas Goirand) Date: Fri, 16 Feb 2018 17:49:37 +0100 Subject: [openstack-dev] Debian OpenStack packages switching to Py3 for Queens In-Reply-To: <20180216154216.06843030ae43a131185b875e@redhat.com> References: <2916933d-c5be-9301-f8de-e0d380627c54@debian.org> <20180216154216.06843030ae43a131185b875e@redhat.com> Message-ID: On 02/16/2018 03:42 PM, Petr Kovar wrote: > On Thu, 15 Feb 2018 09:31:19 +0100 > Thomas Goirand wrote: > >> Hi, >> >> Since I'm getting some pressure from other DDs to actively remove Py2 >> support from my packages, I'm very much considering switching all of the >> Debian packages for Queens to using exclusively Py3. I would have like >> to read some opinions about this. Is it a good time for such move? I >> hope it is, because I'd like to maintain as few Python package with Py2 >> support at the time of Debian Buster freeze. >> >> Also, doing Queens, I've noticed that os-xenapi is still full of py2 >> only stuff in os_xenapi/dom0. Can we get those fixes? Here's my patch: >> >> https://review.openstack.org/544809 > > Hey Thomas, slightly off-topic to this, but would it be a good idea to > resurrect OpenStack install guides for Debian if Debian packages are still > maintained? Yes it would. I'm not sure where to start, since all the doc has moved to individual projects. Cheers, Thomas Goirand (zigo) From Louie.Kwan at windriver.com Fri Feb 16 17:24:19 2018 From: Louie.Kwan at windriver.com (Kwan, Louie) Date: Fri, 16 Feb 2018 17:24:19 +0000 Subject: [openstack-dev] [masakari] [notification api] How to clean up or purging of records In-Reply-To: References: <47EFB32CD8770A4D9590812EE28C977E9624F65F@ALA-MBD.corp.ad.wrs.com>, Message-ID: <47EFB32CD8770A4D9590812EE28C977E9624FC85@ALA-MBD.corp.ad.wrs.com> Yeee. Tha't is it. Thanks Louie ________________________________________ From: Bhor, Dinesh [Dinesh.Bhor at nttdata.com] Sent: Thursday, February 15, 2018 7:09 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [masakari] [notification api] How to clean up or purging of records Hi Kwan Louie, I think you are looking for this: https://review.openstack.org/#/c/487430/ Thank you, Dinesh Bhor ________________________________________ From: Kwan, Louie Sent: 16 February 2018 02:46:14 To: OpenStack Development Mailing List (not for usage questions) Subject: [openstack-dev] [masakari] [notification api] How to clean up or purging of records Hi All, Just wondering, how can we clean up the masakari notification list or purging all old records in the DB? 
openstack notification list returns too many old records During semi-auto testing, I created a long list of history of records and would like to clean it up and avoid unnecessary actions. Any short term solution is what I am looking for and/or ideas how to extend the CLI is also welcomed so that some of us can extend it later. Thanks, Louie ______________________________________________________________________ Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged, confidential, and proprietary data. If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding. __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From gcerami at redhat.com Fri Feb 16 17:35:43 2018 From: gcerami at redhat.com (Gabriele Cerami) Date: Fri, 16 Feb 2018 17:35:43 +0000 Subject: [openstack-dev] [tripleo] modifications to the tech debt policy Message-ID: <20180216173543.srg3jh3a2zhv4t33@localhost> Hi, I started circling around technical debts a few months ago, and recently started to propose changes in my team (CI) process on how to manage them. I didn't realize there was already a policy discussed and merged, and I underestimated the audience for my proposal, which cannot be only my team anymore. I'm proposing a modification to the existing policy here https://review.openstack.org/545392 I have tried to add an expanded definition of what we can consider technical debt, the consequences it has to our development tasks, and a process to properly identify, evaluate and decide how much of a risk they pose and how not to let create unrealistic expectations on the time available for the implementation of future features. I hope it's not too late to have a discussion on these modifications at the PTG too Thanks. From colleen at gazlene.net Fri Feb 16 20:01:10 2018 From: colleen at gazlene.net (Colleen Murphy) Date: Fri, 16 Feb 2018 21:01:10 +0100 Subject: [openstack-dev] [keystone] Keystone Team Update - Week of 12 February 2018 Message-ID: # Keystone Team Update - Week of 12 February 2018 ## News Relatively quiet week, the big news is that we're on track for a solid RC2 next week :) ## Recently Merged Changes Search query: https://goo.gl/hdD9Kw We merged 37 changes this week. The most significant of these were Queens backports fixing issues with system-scope, needed before RC2. We also merged documentation for application credentials and a bugfix for a long-standing bug in the identity backend[1]. [1] https://bugs.launchpad.net/keystone/+bug/1718747 ## Changes that need Attention Search query: https://goo.gl/tW5PiH There are 23 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots. While my filter ignores the proposal bot since it's not a human waiting for feedback, as per Sean's countdown notice[2] please be on the lookout for translations proposals so they can be merged ASAP. [2] http://lists.openstack.org/pipermail/openstack-dev/2018-February/127465.html ## Milestone Outlook https://releases.openstack.org/queens/schedule.html We are in the last week of the cycle. We'll release our RC2 next week. 
We currently seem to be on track for all RC2-targeted bugs[3] but please be on the lookout for any critical bugs that arise in the next week. There's also a bug that we had targeted at Queens-3 but didn't retarget at an RC because we haven't been able to reproduce it yet. If you have spare cycles, please take a look[4]. [3] https://launchpad.net/keystone/+milestone/queens-rc2 [4] https://bugs.launchpad.net/keystone/+bug/1735250 ## Help with this newsletter Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter From lbragstad at gmail.com Fri Feb 16 20:48:16 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Fri, 16 Feb 2018 14:48:16 -0600 Subject: [openstack-dev] [Barbican] Keystone Listener error when processing delete project event In-Reply-To: <553_1518795653_5A86FB85_553_460_1_849F1D1DBD4A00479343403412AE4F8201AB125DE1@ESSEN.office.orange.intra> References: <553_1518795653_5A86FB85_553_460_1_849F1D1DBD4A00479343403412AE4F8201AB125DE1@ESSEN.office.orange.intra> Message-ID: Taking a quick look at the barbican code, it might be that something isn't setting up the _SESSION_FACTORY [0], but I'm certainly not a barbican expert. Might be worth while to open a bug [1]. [0] https://github.com/openstack/barbican/blob/5b525f6b0a7cf5342a9ffa3ca3618028d6d53649/barbican/model/repositories.py#L95-L113 [1] https://bugs.launchpad.net/barbican On Fri, Feb 16, 2018 at 9:40 AM, wrote: > Hello, > > > > > > The Keystone Listener outputs the below error, over and over again, when > processing a delete project event. Do you have any idea why this happens? > Happens the same with Ocata and Pike versions. > > > > Thank you, > > Mihaela Balas > > > > 2018-02-16 15:36:02.673 1 DEBUG amqp [-] heartbeat_tick : for connection > ef42486446c34306bd10921b264da26b heartbeat_tick > /opt/barbican/lib/python2.7/site-packages/amqp/connection.py:678 > > 2018-02-16 15:36:02.673 1 DEBUG amqp [-] heartbeat_tick : Prev sent/recv: > 111624/334860, now - 111625/334860, monotonic - 895085.445269, > last_heartbeat_sent - 895085.445263, heartbeat int. - 60 for connection > ef42486446c34306bd10921b264da26b heartbeat_tick > /opt/barbican/lib/python2.7/site-packages/amqp/connection.py:700 > > 2018-02-16 15:36:02.675 1 DEBUG oslo_messaging._drivers.amqpdriver [-] > received message with unique_id: 0a407a9a71b641c888c49c0d4674b607 > __call__ /opt/barbican/lib/python2.7/site-packages/oslo_messaging/_ > drivers/amqpdriver.py:257 > > 2018-02-16 15:36:02.675 1 DEBUG amqp [-] heartbeat_tick : for connection > ef42486446c34306bd10921b264da26b heartbeat_tick > /opt/barbican/lib/python2.7/site-packages/amqp/connection.py:678 > > 2018-02-16 15:36:02.675 1 DEBUG amqp [-] heartbeat_tick : Prev sent/recv: > 111625/334860, now - 111625/334863, monotonic - 895085.447218, > last_heartbeat_sent - 895085.445263, heartbeat int. 
- 60 for connection > ef42486446c34306bd10921b264da26b heartbeat_tick > /opt/barbican/lib/python2.7/site-packages/amqp/connection.py:700 > > 2018-02-16 15:36:02.676 1 DEBUG barbican.queue.keystone_listener [-] > Input keystone event publisher_id = identity.keystone-admin-api-2903979735-fsj57 > process_event /opt/barbican/lib/python2.7/site-packages/barbican/queue/ > keystone_listener.py:72 > > 2018-02-16 15:36:02.676 1 DEBUG barbican.queue.keystone_listener [-] > Input keystone event payload = {u'resource_info': u' > 79d3491d58e542ada54776d2bd68ef7e'} process_event > /opt/barbican/lib/python2.7/site-packages/barbican/queue/ > keystone_listener.py:73 > > 2018-02-16 15:36:02.676 1 DEBUG barbican.queue.keystone_listener [-] > Input keystone event type = identity.project.deleted process_event > /opt/barbican/lib/python2.7/site-packages/barbican/queue/ > keystone_listener.py:74 > > 2018-02-16 15:36:02.677 1 DEBUG barbican.queue.keystone_listener [-] > Input keystone event metadata = {'timestamp': u'2018-02-16 > 15:35:48.506374', 'message_id': u'5cc2ef82-75a7-4ce9-a9eb-573ae008f4e4'} > process_event /opt/barbican/lib/python2.7/site-packages/barbican/queue/ > keystone_listener.py:75 > > 2018-02-16 15:36:02.677 1 DEBUG barbican.queue.keystone_listener [-] > Keystone Event: resource type=project, operation type=deleted, keystone id= > 79d3491d58e542ada54776d2bd68ef7e process_event > /opt/barbican/lib/python2.7/site-packages/barbican/queue/ > keystone_listener.py:80 > > 2018-02-16 15:36:02.677 1 DEBUG barbican.tasks.keystone_consumer [-] > Creating KeystoneEventConsumer task processor __init__ > /opt/barbican/lib/python2.7/site-packages/barbican/tasks/ > keystone_consumer.py:40 > > 2018-02-16 15:36:02.677 1 DEBUG barbican.model.repositories [-] Getting > session... 
get_session /opt/barbican/lib/python2.7/ > site-packages/barbican/model/repositories.py:353 > > 2018-02-16 15:36:02.677 1 ERROR barbican.tasks.resources [-] Could not > retrieve information needed to process task 'Project cleanup via Keystone > notifications'.: TypeError: 'NoneType' object is not callable > > 2018-02-16 15:36:02.677 1 ERROR barbican.tasks.resources Traceback (most > recent call last): > > 2018-02-16 15:36:02.677 1 ERROR barbican.tasks.resources File > "/opt/barbican/lib/python2.7/site-packages/barbican/tasks/resources.py", > line 91, in process > > 2018-02-16 15:36:02.677 1 ERROR barbican.tasks.resources entity = > self.retrieve_entity(*args, **kwargs) > > 2018-02-16 15:36:02.677 1 ERROR barbican.tasks.resources File > "/opt/barbican/lib/python2.7/site-packages/barbican/tasks/keystone_consumer.py", > line 67, in retrieve_entity > > 2018-02-16 15:36:02.677 1 ERROR barbican.tasks.resources > suppress_exception=True) > > 2018-02-16 15:36:02.677 1 ERROR barbican.tasks.resources File > "/opt/barbican/lib/python2.7/site-packages/barbican/model/repositories.py", > line 586, in find_by_external_project_id > > 2018-02-16 15:36:02.677 1 ERROR barbican.tasks.resources session = > self.get_session(session) > > 2018-02-16 15:36:02.677 1 ERROR barbican.tasks.resources File > "/opt/barbican/lib/python2.7/site-packages/barbican/model/repositories.py", > line 354, in get_session > > 2018-02-16 15:36:02.677 1 ERROR barbican.tasks.resources return > session or get_session() > > 2018-02-16 15:36:02.677 1 ERROR barbican.tasks.resources File > "/opt/barbican/lib/python2.7/site-packages/barbican/model/repositories.py", > line 161, in get_session > > 2018-02-16 15:36:02.677 1 ERROR barbican.tasks.resources return > _SESSION_FACTORY() > > 2018-02-16 15:36:02.677 1 ERROR barbican.tasks.resources TypeError: > 'NoneType' object is not callable > > 2018-02-16 15:36:02.677 1 ERROR barbican.tasks.resources > > 2018-02-16 15:36:02.678 1 DEBUG oslo.messaging._drivers.impl_rabbit [-] > Timed out waiting for RPC response: timed out _raise_timeout > /opt/barbican/lib/python2.7/site-packages/oslo_messaging/_ > drivers/impl_rabbit.py:1037 > > 2018-02-16 15:36:02.678 1 DEBUG oslo.messaging._drivers.impl_rabbit [-] > Timed out waiting for RPC response: Timeout while waiting on RPC response - > topic: "", RPC method: "" info: "" > _raise_timeout /opt/barbican/lib/python2.7/site-packages/oslo_messaging/_ > drivers/impl_rabbit.py:1037 > > 2018-02-16 15:36:02.679 1 DEBUG amqp [-] heartbeat_tick : for connection > ef42486446c34306bd10921b264da26b heartbeat_tick > /opt/barbican/lib/python2.7/site-packages/amqp/connection.py:678 > > 2018-02-16 15:36:02.679 1 DEBUG amqp [-] heartbeat_tick : Prev sent/recv: > 111625/334863, now - 111626/334863, monotonic - 895085.450656, > last_heartbeat_sent - 895085.450651, heartbeat int. 
- 60 for connection > ef42486446c34306bd10921b264da26b heartbeat_tick > /opt/barbican/lib/python2.7/site-packages/amqp/connection.py:700 > > 2018-02-16 15:36:02.680 1 DEBUG oslo_messaging._drivers.amqpdriver [-] > received message with unique_id: 0a407a9a71b641c888c49c0d4674b607 > __call__ /opt/barbican/lib/python2.7/site-packages/oslo_messaging/_ > drivers/amqpdriver.py:257 > > 2018-02-16 15:36:02.681 1 DEBUG amqp [-] heartbeat_tick : for connection > ef42486446c34306bd10921b264da26b heartbeat_tick > /opt/barbican/lib/python2.7/site-packages/amqp/connection.py:678 > > 2018-02-16 15:36:02.681 1 DEBUG amqp [-] heartbeat_tick : Prev sent/recv: > 111626/334863, now - 111626/334866, monotonic - 895085.452749, > last_heartbeat_sent - 895085.450651, heartbeat int. - 60 for connection > ef42486446c34306bd10921b264da26b heartbeat_tick > /opt/barbican/lib/python2.7/site-packages/amqp/connection.py:700 > > 2018-02-16 15:36:02.681 1 DEBUG barbican.queue.keystone_listener [-] > Input keystone event publisher_id = identity.keystone-admin-api-2903979735-fsj57 > process_event /opt/barbican/lib/python2.7/site-packages/barbican/queue/ > keystone_listener.py:72 > > 2018-02-16 15:36:02.682 1 DEBUG barbican.queue.keystone_listener [-] > Input keystone event payload = {u'resource_info': u' > 79d3491d58e542ada54776d2bd68ef7e'} process_event > /opt/barbican/lib/python2.7/site-packages/barbican/queue/ > keystone_listener.py:73 > > 2018-02-16 15:36:02.682 1 DEBUG barbican.queue.keystone_listener [-] > Input keystone event type = identity.project.deleted process_event > /opt/barbican/lib/python2.7/site-packages/barbican/queue/ > keystone_listener.py:74 > > 2018-02-16 15:36:02.682 1 DEBUG barbican.queue.keystone_listener [-] > Input keystone event metadata = {'timestamp': u'2018-02-16 > 15:35:48.506374', 'message_id': u'5cc2ef82-75a7-4ce9-a9eb-573ae008f4e4'} > process_event /opt/barbican/lib/python2.7/site-packages/barbican/queue/ > keystone_listener.py:75 > > 2018-02-16 15:36:02.682 1 DEBUG barbican.queue.keystone_listener [-] > Keystone Event: resource type=project, operation type=deleted, keystone id= > 79d3491d58e542ada54776d2bd68ef7e process_event > /opt/barbican/lib/python2.7/site-packages/barbican/queue/ > keystone_listener.py:80 > > 2018-02-16 15:36:02.682 1 DEBUG barbican.tasks.keystone_consumer [-] > Creating KeystoneEventConsumer task processor __init__ > /opt/barbican/lib/python2.7/site-packages/barbican/tasks/ > keystone_consumer.py:40 > > 2018-02-16 15:36:02.683 1 DEBUG barbican.model.repositories [-] Getting > session... 
get_session /opt/barbican/lib/python2.7/ > site-packages/barbican/model/repositories.py:353 > > 2018-02-16 15:36:02.683 1 ERROR barbican.tasks.resources [-] Could not > retrieve information needed to process task 'Project cleanup via Keystone > notifications'.: TypeError: 'NoneType' object is not callable > > 2018-02-16 15:36:02.683 1 ERROR barbican.tasks.resources Traceback (most > recent call last): > > 2018-02-16 15:36:02.683 1 ERROR barbican.tasks.resources File > "/opt/barbican/lib/python2.7/site-packages/barbican/tasks/resources.py", > line 91, in process > > 2018-02-16 15:36:02.683 1 ERROR barbican.tasks.resources entity = > self.retrieve_entity(*args, **kwargs) > > 2018-02-16 15:36:02.683 1 ERROR barbican.tasks.resources File > "/opt/barbican/lib/python2.7/site-packages/barbican/tasks/keystone_consumer.py", > line 67, in retrieve_entity > > 2018-02-16 15:36:02.683 1 ERROR barbican.tasks.resources > suppress_exception=True) > > 2018-02-16 15:36:02.683 1 ERROR barbican.tasks.resources File > "/opt/barbican/lib/python2.7/site-packages/barbican/model/repositories.py", > line 586, in find_by_external_project_id > > 2018-02-16 15:36:02.683 1 ERROR barbican.tasks.resources session = > self.get_session(session) > > 2018-02-16 15:36:02.683 1 ERROR barbican.tasks.resources File > "/opt/barbican/lib/python2.7/site-packages/barbican/model/repositories.py", > line 354, in get_session > > 2018-02-16 15:36:02.683 1 ERROR barbican.tasks.resources return > session or get_session() > > 2018-02-16 15:36:02.683 1 ERROR barbican.tasks.resources File > "/opt/barbican/lib/python2.7/site-packages/barbican/model/repositories.py", > line 161, in get_session > > 2018-02-16 15:36:02.683 1 ERROR barbican.tasks.resources return > _SESSION_FACTORY() > > 2018-02-16 15:36:02.683 1 ERROR barbican.tasks.resources TypeError: > 'NoneType' object is not callable > > 2018-02-16 15:36:02.683 1 ERROR barbican.tasks.resources > > 2018-02-16 15:36:02.683 1 DEBUG oslo.messaging._drivers.impl_rabbit [-] > Timed out waiting for RPC response: timed out _raise_timeout > /opt/barbican/lib/python2.7/site-packages/oslo_messaging/_ > drivers/impl_rabbit.py:1037 > > 2018-02-16 15:36:02.684 1 DEBUG oslo.messaging._drivers.impl_rabbit [-] > Timed out waiting for RPC response: Timeout while waiting on RPC response - > topic: "", RPC method: "" info: "" > _raise_timeout /opt/barbican/lib/python2.7/site-packages/oslo_messaging/_ > drivers/impl_rabbit.py:1037 > > 2018-02-16 15:36:02.684 1 DEBUG amqp [-] heartbeat_tick : for connection > ef42486446c34306bd10921b264da26b heartbeat_tick > /opt/barbican/lib/python2.7/site-packages/amqp/connection.py:678 > > 2018-02-16 15:36:02.684 1 DEBUG amqp [-] heartbeat_tick : Prev sent/recv: > 111626/334866, now - 111627/334866, monotonic - 895085.456135, > last_heartbeat_sent - 895085.456126, heartbeat int. 
- 60 for connection > ef42486446c34306bd10921b264da26b heartbeat_tick > /opt/barbican/lib/python2.7/site-packages/amqp/connection.py:700 > > 2018-02-16 15:36:02.685 1 DEBUG oslo_messaging._drivers.amqpdriver [-] > received message with unique_id: 0a407a9a71b641c888c49c0d4674b607 > __call__ /opt/barbican/lib/python2.7/site-packages/oslo_messaging/_ > drivers/amqpdriver.py:257 > > 2018-02-16 15:36:02.686 1 DEBUG amqp [-] heartbeat_tick : for connection > ef42486446c34306bd10921b264da26b heartbeat_tick > /opt/barbican/lib/python2.7/site-packages/amqp/connection.py:678 > > 2018-02-16 15:36:02.686 1 DEBUG amqp [-] heartbeat_tick : Prev sent/recv: > 111627/334866, now - 111627/334869, monotonic - 895085.457888, > last_heartbeat_sent - 895085.456126, heartbeat int. - 60 for connection > ef42486446c34306bd10921b264da26b heartbeat_tick > /opt/barbican/lib/python2.7/site-packages/amqp/connection.py:700 > > 2018-02-16 15:36:02.686 1 DEBUG barbican.queue.keystone_listener [-] > Input keystone event publisher_id = identity.keystone-admin-api-2903979735-fsj57 > process_event /opt/barbican/lib/python2.7/site-packages/barbican/queue/ > keystone_listener.py:72 > > 2018-02-16 15:36:02.687 1 DEBUG barbican.queue.keystone_listener [-] > Input keystone event payload = {u'resource_info': u' > 79d3491d58e542ada54776d2bd68ef7e'} process_event > /opt/barbican/lib/python2.7/site-packages/barbican/queue/ > keystone_listener.py:73 > > 2018-02-16 15:36:02.687 1 DEBUG barbican.queue.keystone_listener [-] > Input keystone event type = identity.project.deleted process_event > /opt/barbican/lib/python2.7/site-packages/barbican/queue/ > keystone_listener.py:74 > > 2018-02-16 15:36:02.687 1 DEBUG barbican.queue.keystone_listener [-] > Input keystone event metadata = {'timestamp': u'2018-02-16 > 15:35:48.506374', 'message_id': u'5cc2ef82-75a7-4ce9-a9eb-573ae008f4e4'} > process_event /opt/barbican/lib/python2.7/site-packages/barbican/queue/ > keystone_listener.py:75 > > 2018-02-16 15:36:02.687 1 DEBUG barbican.queue.keystone_listener [-] > Keystone Event: resource type=project, operation type=deleted, keystone id= > 79d3491d58e542ada54776d2bd68ef7e process_event > /opt/barbican/lib/python2.7/site-packages/barbican/queue/ > keystone_listener.py:80 > > 2018-02-16 15:36:02.687 1 DEBUG barbican.tasks.keystone_consumer [-] > Creating KeystoneEventConsumer task processor __init__ > /opt/barbican/lib/python2.7/site-packages/barbican/tasks/ > keystone_consumer.py:40 > > 2018-02-16 15:36:02.688 1 DEBUG barbican.model.repositories [-] Getting > session... 
get_session /opt/barbican/lib/python2.7/ > site-packages/barbican/model/repositories.py:353 > > 2018-02-16 15:36:02.688 1 ERROR barbican.tasks.resources [-] Could not > retrieve information needed to process task 'Project cleanup via Keystone > notifications'.: TypeError: 'NoneType' object is not callable > > 2018-02-16 15:36:02.688 1 ERROR barbican.tasks.resources Traceback (most > recent call last): > > 2018-02-16 15:36:02.688 1 ERROR barbican.tasks.resources File > "/opt/barbican/lib/python2.7/site-packages/barbican/tasks/resources.py", > line 91, in process > > 2018-02-16 15:36:02.688 1 ERROR barbican.tasks.resources entity = > self.retrieve_entity(*args, **kwargs) > > 2018-02-16 15:36:02.688 1 ERROR barbican.tasks.resources File > "/opt/barbican/lib/python2.7/site-packages/barbican/tasks/keystone_consumer.py", > line 67, in retrieve_entity > > 2018-02-16 15:36:02.688 1 ERROR barbican.tasks.resources > suppress_exception=True) > > 2018-02-16 15:36:02.688 1 ERROR barbican.tasks.resources File > "/opt/barbican/lib/python2.7/site-packages/barbican/model/repositories.py", > line 586, in find_by_external_project_id > > 2018-02-16 15:36:02.688 1 ERROR barbican.tasks.resources session = > self.get_session(session) > > 2018-02-16 15:36:02.688 1 ERROR barbican.tasks.resources File > "/opt/barbican/lib/python2.7/site-packages/barbican/model/repositories.py", > line 354, in get_session > > 2018-02-16 15:36:02.688 1 ERROR barbican.tasks.resources return > session or get_session() > > 2018-02-16 15:36:02.688 1 ERROR barbican.tasks.resources File > "/opt/barbican/lib/python2.7/site-packages/barbican/model/repositories.py", > line 161, in get_session > > 2018-02-16 15:36:02.688 1 ERROR barbican.tasks.resources return > _SESSION_FACTORY() > > 2018-02-16 15:36:02.688 1 ERROR barbican.tasks.resources TypeError: > 'NoneType' object is not callable > > 2018-02-16 15:36:02.688 1 ERROR barbican.tasks.resources > > 2018-02-16 15:36:02.688 1 DEBUG oslo.messaging._drivers.impl_rabbit [-] > Timed out waiting for RPC response: timed out _raise_timeout > /opt/barbican/lib/python2.7/site-packages/oslo_messaging/_ > drivers/impl_rabbit.py:1037 > > 2018-02-16 15:36:02.689 1 DEBUG oslo.messaging._drivers.impl_rabbit [-] > Timed out waiting for RPC response: Timeout while waiting on RPC response - > topic: "", RPC method: "" info: "" > _raise_timeout /opt/barbican/lib/python2.7/site-packages/oslo_messaging/_ > drivers/impl_rabbit.py:1037 > > 2018-02-16 15:36:02.689 1 DEBUG amqp [-] heartbeat_tick : for connection > ef42486446c34306bd10921b264da26b heartbeat_tick > /opt/barbican/lib/python2.7/site-packages/amqp/connection.py:678 > > 2018-02-16 15:36:02.689 1 DEBUG amqp [-] heartbeat_tick : Prev sent/recv: > 111627/334869, now - 111628/334869, monotonic - 895085.461079, > last_heartbeat_sent - 895085.461074, heartbeat int. 
- 60 for connection > ef42486446c34306bd10921b264da26b heartbeat_tick > /opt/barbican/lib/python2.7/site-packages/amqp/connection.py:700 > > 2018-02-16 15:36:02.690 1 DEBUG oslo_messaging._drivers.amqpdriver [-] > received message with unique_id: 0a407a9a71b641c888c49c0d4674b607 > __call__ /opt/barbican/lib/python2.7/site-packages/oslo_messaging/_ > drivers/amqpdriver.py:257 > > 2018-02-16 15:36:02.691 1 DEBUG amqp [-] heartbeat_tick : for connection > ef42486446c34306bd10921b264da26b heartbeat_tick > /opt/barbican/lib/python2.7/site-packages/amqp/connection.py:678 > > 2018-02-16 15:36:02.691 1 DEBUG amqp [-] heartbeat_tick : Prev sent/recv: > 111628/334869, now - 111628/334872, monotonic - 895085.462863, > last_heartbeat_sent - 895085.461074, heartbeat int. - 60 for connection > ef42486446c34306bd10921b264da26b heartbeat_tick > /opt/barbican/lib/python2.7/site-packages/amqp/connection.py:700 > > 2018-02-16 15:36:02.691 1 DEBUG barbican.queue.keystone_listener [-] > Input keystone event publisher_id = identity.keystone-admin-api-2903979735-fsj57 > process_event /opt/barbican/lib/python2.7/site-packages/barbican/queue/ > keystone_listener.py:72 > > 2018-02-16 15:36:02.691 1 DEBUG barbican.queue.keystone_listener [-] > Input keystone event payload = {u'resource_info': u' > 79d3491d58e542ada54776d2bd68ef7e'} process_event > /opt/barbican/lib/python2.7/site-packages/barbican/queue/ > keystone_listener.py:73 > > 2018-02-16 15:36:02.692 1 DEBUG barbican.queue.keystone_listener [-] > Input keystone event type = identity.project.deleted process_event > /opt/barbican/lib/python2.7/site-packages/barbican/queue/ > keystone_listener.py:74 > > 2018-02-16 15:36:02.692 1 DEBUG barbican.queue.keystone_listener [-] > Input keystone event metadata = {'timestamp': u'2018-02-16 > 15:35:48.506374', 'message_id': u'5cc2ef82-75a7-4ce9-a9eb-573ae008f4e4'} > process_event /opt/barbican/lib/python2.7/site-packages/barbican/queue/ > keystone_listener.py:75 > > 2018-02-16 15:36:02.692 1 DEBUG barbican.queue.keystone_listener [-] > Keystone Event: resource type=project, operation type=deleted, keystone id= > 79d3491d58e542ada54776d2bd68ef7e process_event > /opt/barbican/lib/python2.7/site-packages/barbican/queue/ > keystone_listener.py:80 > > 2018-02-16 15:36:02.692 1 DEBUG barbican.tasks.keystone_consumer [-] > Creating KeystoneEventConsumer task processor __init__ > /opt/barbican/lib/python2.7/site-packages/barbican/tasks/ > keystone_consumer.py:40 > > 2018-02-16 15:36:02.693 1 DEBUG barbican.model.repositories [-] Getting > session... 
get_session /opt/barbican/lib/python2.7/ > site-packages/barbican/model/repositories.py:353 > > 2018-02-16 15:36:02.693 1 ERROR barbican.tasks.resources [-] Could not > retrieve information needed to process task 'Project cleanup via Keystone > notifications'.: TypeError: 'NoneType' object is not callable > > 2018-02-16 15:36:02.693 1 ERROR barbican.tasks.resources Traceback (most > recent call last): > > 2018-02-16 15:36:02.693 1 ERROR barbican.tasks.resources File > "/opt/barbican/lib/python2.7/site-packages/barbican/tasks/resources.py", > line 91, in process > > 2018-02-16 15:36:02.693 1 ERROR barbican.tasks.resources entity = > self.retrieve_entity(*args, **kwargs) > > 2018-02-16 15:36:02.693 1 ERROR barbican.tasks.resources File > "/opt/barbican/lib/python2.7/site-packages/barbican/tasks/keystone_consumer.py", > line 67, in retrieve_entity > > 2018-02-16 15:36:02.693 1 ERROR barbican.tasks.resources > suppress_exception=True) > > 2018-02-16 15:36:02.693 1 ERROR barbican.tasks.resources File > "/opt/barbican/lib/python2.7/site-packages/barbican/model/repositories.py", > line 586, in find_by_external_project_id > > 2018-02-16 15:36:02.693 1 ERROR barbican.tasks.resources session = > self.get_session(session) > > 2018-02-16 15:36:02.693 1 ERROR barbican.tasks.resources File > "/opt/barbican/lib/python2.7/site-packages/barbican/model/repositories.py", > line 354, in get_session > > 2018-02-16 15:36:02.693 1 ERROR barbican.tasks.resources return > session or get_session() > > 2018-02-16 15:36:02.693 1 ERROR barbican.tasks.resources File > "/opt/barbican/lib/python2.7/site-packages/barbican/model/repositories.py", > line 161, in get_session > > 2018-02-16 15:36:02.693 1 ERROR barbican.tasks.resources return > _SESSION_FACTORY() > > 2018-02-16 15:36:02.693 1 ERROR barbican.tasks.resources TypeError: > 'NoneType' object is not callable > > 2018-02-16 15:36:02.693 1 ERROR barbican.tasks.resources > > 2018-02-16 15:36:02.693 1 DEBUG oslo.messaging._drivers.impl_rabbit [-] > Timed out waiting for RPC response: timed out _raise_timeout > /opt/barbican/lib/python2.7/site-packages/oslo_messaging/_ > drivers/impl_rabbit.py:1037 > > 2018-02-16 15:36:02.694 1 DEBUG oslo.messaging._drivers.impl_rabbit [-] > Timed out waiting for RPC response: Timeout while waiting on RPC response - > topic: "", RPC method: "" info: "" > _raise_timeout /opt/barbican/lib/python2.7/site-packages/oslo_messaging/_ > drivers/impl_rabbit.py:1037 > > 2018-02-16 15:36:02.694 1 DEBUG amqp [-] heartbeat_tick : for connection > ef42486446c34306bd10921b264da26b heartbeat_tick > /opt/barbican/lib/python2.7/site-packages/amqp/connection.py:678 > > 2018-02-16 15:36:02.694 1 DEBUG amqp [-] heartbeat_tick : Prev sent/recv: > 111628/334872, now - 111629/334872, monotonic - 895085.466161, > last_heartbeat_sent - 895085.466156, heartbeat int. 
- 60 for connection > ef42486446c34306bd10921b264da26b heartbeat_tick > /opt/barbican/lib/python2.7/site-packages/amqp/connection.py:700 > > 2018-02-16 15:36:02.695 1 DEBUG oslo_messaging._drivers.amqpdriver [-] > received message with unique_id: 0a407a9a71b641c888c49c0d4674b607 > __call__ /opt/barbican/lib/python2.7/site-packages/oslo_messaging/_ > drivers/amqpdriver.py:257 > > 2018-02-16 15:36:02.696 1 DEBUG amqp [-] heartbeat_tick : for connection > ef42486446c34306bd10921b264da26b heartbeat_tick > /opt/barbican/lib/python2.7/site-packages/amqp/connection.py:678 > > 2018-02-16 15:36:02.696 1 DEBUG amqp [-] heartbeat_tick : Prev sent/recv: > 111629/334872, now - 111629/334875, monotonic - 895085.467837, > last_heartbeat_sent - 895085.466156, heartbeat int. - 60 for connection > ef42486446c34306bd10921b264da26b heartbeat_tick > /opt/barbican/lib/python2.7/site-packages/amqp/connection.py:700 > > 2018-02-16 15:36:02.696 1 DEBUG barbican.queue.keystone_listener [-] > Input keystone event publisher_id = identity.keystone-admin-api-2903979735-fsj57 > process_event /opt/barbican/lib/python2.7/site-packages/barbican/queue/ > keystone_listener.py:72 > > 2018-02-16 15:36:02.697 1 DEBUG barbican.queue.keystone_listener [-] > Input keystone event payload = {u'resource_info': u' > 79d3491d58e542ada54776d2bd68ef7e'} process_event > /opt/barbican/lib/python2.7/site-packages/barbican/queue/ > keystone_listener.py:73 > > 2018-02-16 15:36:02.697 1 DEBUG barbican.queue.keystone_listener [-] > Input keystone event type = identity.project.deleted process_event > /opt/barbican/lib/python2.7/site-packages/barbican/queue/ > keystone_listener.py:74 > > 2018-02-16 15:36:02.697 1 DEBUG barbican.queue.keystone_listener [-] > Input keystone event metadata = {'timestamp': u'2018-02-16 > 15:35:48.506374', 'message_id': u'5cc2ef82-75a7-4ce9-a9eb-573ae008f4e4'} > process_event /opt/barbican/lib/python2.7/site-packages/barbican/queue/ > keystone_listener.py:75 > > 2018-02-16 15:36:02.697 1 DEBUG barbican.queue.keystone_listener [-] > Keystone Event: resource type=project, operation type=deleted, keystone id= > 79d3491d58e542ada54776d2bd68ef7e process_event > /opt/barbican/lib/python2.7/site-packages/barbican/queue/ > keystone_listener.py:80 > > 2018-02-16 15:36:02.698 1 DEBUG barbican.tasks.keystone_consumer [-] > Creating KeystoneEventConsumer task processor __init__ > /opt/barbican/lib/python2.7/site-packages/barbican/tasks/ > keystone_consumer.py:40 > > 2018-02-16 15:36:02.698 1 DEBUG barbican.model.repositories [-] Getting > session... 
get_session /opt/barbican/lib/python2.7/ > site-packages/barbican/model/repositories.py:353 > > 2018-02-16 15:36:02.698 1 ERROR barbican.tasks.resources [-] Could not > retrieve information needed to process task 'Project cleanup via Keystone > notifications'.: TypeError: 'NoneType' object is not callable > > 2018-02-16 15:36:02.698 1 ERROR barbican.tasks.resources Traceback (most > recent call last): > > 2018-02-16 15:36:02.698 1 ERROR barbican.tasks.resources File > "/opt/barbican/lib/python2.7/site-packages/barbican/tasks/resources.py", > line 91, in process > > 2018-02-16 15:36:02.698 1 ERROR barbican.tasks.resources entity = > self.retrieve_entity(*args, **kwargs) > > 2018-02-16 15:36:02.698 1 ERROR barbican.tasks.resources File > "/opt/barbican/lib/python2.7/site-packages/barbican/tasks/keystone_consumer.py", > line 67, in retrieve_entity > > 2018-02-16 15:36:02.698 1 ERROR barbican.tasks.resources > suppress_exception=True) > > 2018-02-16 15:36:02.698 1 ERROR barbican.tasks.resources File > "/opt/barbican/lib/python2.7/site-packages/barbican/model/repositories.py", > line 586, in find_by_external_project_id > > 2018-02-16 15:36:02.698 1 ERROR barbican.tasks.resources session = > self.get_session(session) > > 2018-02-16 15:36:02.698 1 ERROR barbican.tasks.resources File > "/opt/barbican/lib/python2.7/site-packages/barbican/model/repositories.py", > line 354, in get_session > > 2018-02-16 15:36:02.698 1 ERROR barbican.tasks.resources return > session or get_session() > > 2018-02-16 15:36:02.698 1 ERROR barbican.tasks.resources File > "/opt/barbican/lib/python2.7/site-packages/barbican/model/repositories.py", > line 161, in get_session > > 2018-02-16 15:36:02.698 1 ERROR barbican.tasks.resources return > _SESSION_FACTORY() > > 2018-02-16 15:36:02.698 1 ERROR barbican.tasks.resources TypeError: > 'NoneType' object is not callable > > 2018-02-16 15:36:02.698 1 ERROR barbican.tasks.resources > > 2018-02-16 15:36:02.699 1 DEBUG oslo.messaging._drivers.impl_rabbit [-] > Timed out waiting for RPC response: timed out _raise_timeout > /opt/barbican/lib/python2.7/site-packages/oslo_messaging/_ > drivers/impl_rabbit.py:1037 > > 2018-02-16 15:36:02.699 1 DEBUG oslo.messaging._drivers.impl_rabbit [-] > Timed out waiting for RPC response: Timeout while waiting on RPC response - > topic: "", RPC method: "" info: "" > _raise_timeout /opt/barbican/lib/python2.7/site-packages/oslo_messaging/_ > drivers/impl_rabbit.py:1037 > > 2018-02-16 15:36:02.699 1 DEBUG amqp [-] heartbeat_tick : for connection > ef42486446c34306bd10921b264da26b heartbeat_tick > /opt/barbican/lib/python2.7/site-packages/amqp/connection.py:678 > > 2018-02-16 15:36:02.700 1 DEBUG amqp [-] heartbeat_tick : Prev sent/recv: > 111629/334875, now - 111630/334875, monotonic - 895085.471446, > last_heartbeat_sent - 895085.471435, heartbeat int. 
- 60 for connection > ef42486446c34306bd10921b264da26b heartbeat_tick > /opt/barbican/lib/python2.7/site-packages/amqp/connection.py:700 > > 2018-02-16 15:36:02.704 1 DEBUG oslo_messaging._drivers.amqpdriver [-] > received message with unique_id: 0a407a9a71b641c888c49c0d4674b607 > __call__ /opt/barbican/lib/python2.7/site-packages/oslo_messaging/_ > drivers/amqpdriver.py:257 > > 2018-02-16 15:36:02.705 1 DEBUG amqp [-] heartbeat_tick : for connection > ef42486446c34306bd10921b264da26b heartbeat_tick > /opt/barbican/lib/python2.7/site-packages/amqp/connection.py:678 > > 2018-02-16 15:36:02.705 1 DEBUG amqp [-] heartbeat_tick : Prev sent/recv: > 111630/334875, now - 111630/334878, monotonic - 895085.476801, > last_heartbeat_sent - 895085.471435, heartbeat int. - 60 for connection > ef42486446c34306bd10921b264da26b heartbeat_tick > /opt/barbican/lib/python2.7/site-packages/amqp/connection.py:700 > > 2018-02-16 15:36:02.705 1 DEBUG barbican.queue.keystone_listener [-] > Input keystone event publisher_id = identity.keystone-admin-api-2903979735-fsj57 > process_event /opt/barbican/lib/python2.7/site-packages/barbican/queue/ > keystone_listener.py:72 > > 2018-02-16 15:36:02.706 1 DEBUG barbican.queue.keystone_listener [-] > Input keystone event payload = {u'resource_info': u' > 79d3491d58e542ada54776d2bd68ef7e'} process_event > /opt/barbican/lib/python2.7/site-packages/barbican/queue/ > keystone_listener.py:73 > > 2018-02-16 15:36:02.706 1 DEBUG barbican.queue.keystone_listener [-] > Input keystone event type = identity.project.deleted process_event > /opt/barbican/lib/python2.7/site-packages/barbican/queue/ > keystone_listener.py:74 > > 2018-02-16 15:36:02.706 1 DEBUG barbican.queue.keystone_listener [-] > Input keystone event metadata = {'timestamp': u'2018-02-16 > 15:35:48.506374', 'message_id': u'5cc2ef82-75a7-4ce9-a9eb-573ae008f4e4'} > process_event /opt/barbican/lib/python2.7/site-packages/barbican/queue/ > keystone_listener.py:75 > > 2018-02-16 15:36:02.706 1 DEBUG barbican.queue.keystone_listener [-] > Keystone Event: resource type=project, operation type=deleted, keystone id= > 79d3491d58e542ada54776d2bd68ef7e process_event > /opt/barbican/lib/python2.7/site-packages/barbican/queue/ > keystone_listener.py:80 > > 2018-02-16 15:36:02.706 1 DEBUG barbican.tasks.keystone_consumer [-] > Creating KeystoneEventConsumer task processor __init__ > /opt/barbican/lib/python2.7/site-packages/barbican/tasks/ > keystone_consumer.py:40 > > 2018-02-16 15:36:02.706 1 DEBUG barbican.model.repositories [-] Getting > session... 
get_session /opt/barbican/lib/python2.7/ > site-packages/barbican/model/repositories.py:353 > > 2018-02-16 15:36:02.707 1 ERROR barbican.tasks.resources [-] Could not > retrieve information needed to process task 'Project cleanup via Keystone > notifications'.: TypeError: 'NoneType' object is not callable > > 2018-02-16 15:36:02.707 1 ERROR barbican.tasks.resources Traceback (most > recent call last): > > 2018-02-16 15:36:02.707 1 ERROR barbican.tasks.resources File > "/opt/barbican/lib/python2.7/site-packages/barbican/tasks/resources.py", > line 91, in process > > 2018-02-16 15:36:02.707 1 ERROR barbican.tasks.resources entity = > self.retrieve_entity(*args, **kwargs) > > 2018-02-16 15:36:02.707 1 ERROR barbican.tasks.resources File > "/opt/barbican/lib/python2.7/site-packages/barbican/tasks/keystone_consumer.py", > line 67, in retrieve_entity > > 2018-02-16 15:36:02.707 1 ERROR barbican.tasks.resources > suppress_exception=True) > > 2018-02-16 15:36:02.707 1 ERROR barbican.tasks.resources File > "/opt/barbican/lib/python2.7/site-packages/barbican/model/repositories.py", > line 586, in find_by_external_project_id > > 2018-02-16 15:36:02.707 1 ERROR barbican.tasks.resources session = > self.get_session(session) > > 2018-02-16 15:36:02.707 1 ERROR barbican.tasks.resources File > "/opt/barbican/lib/python2.7/site-packages/barbican/model/repositories.py", > line 354, in get_session > > 2018-02-16 15:36:02.707 1 ERROR barbican.tasks.resources return > session or get_session() > > 2018-02-16 15:36:02.707 1 ERROR barbican.tasks.resources File > "/opt/barbican/lib/python2.7/site-packages/barbican/model/repositories.py", > line 161, in get_session > > 2018-02-16 15:36:02.707 1 ERROR barbican.tasks.resources return > _SESSION_FACTORY() > > 2018-02-16 15:36:02.707 1 ERROR barbican.tasks.resources TypeError: > 'NoneType' object is not callable > > 2018-02-16 15:36:02.707 1 ERROR barbican.tasks.resources > > _________________________________________________________________________________________________________________________ > > Ce message et ses pieces jointes peuvent contenir des informations confidentielles ou privilegiees et ne doivent donc > pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce message par erreur, veuillez le signaler > a l'expediteur et le detruire ainsi que les pieces jointes. Les messages electroniques etant susceptibles d'alteration, > Orange decline toute responsabilite si ce message a ete altere, deforme ou falsifie. Merci. > > This message and its attachments may contain confidential or privileged information that may be protected by law; > they should not be distributed, used or copied without authorisation. > If you have received this email in error, please notify the sender and delete this message and its attachments. > As emails may be altered, Orange is not liable for messages that have been modified, changed or falsified. > Thank you. > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
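For anyone else chasing this, the pattern behind the repeated traceback is easy to reproduce outside Barbican. The sketch below is not Barbican's actual code (the names are made up); it only illustrates the failure mode Lance points at, a module-level session factory that was never initialized for the process that needs it, and how a guard could turn it into a clearer error:

    # Illustrative only: a module-level factory left as None until some
    # start-up hook populates it, mirroring the traceback above.
    _SESSION_FACTORY = None

    def setup_session_factory(factory):
        # Normally called once while the service is bootstrapping.
        global _SESSION_FACTORY
        _SESSION_FACTORY = factory

    def get_session():
        # If the start-up hook never ran in this process, calling the
        # factory raises: TypeError: 'NoneType' object is not callable.
        if _SESSION_FACTORY is None:
            raise RuntimeError("session factory not initialized; "
                               "was repository setup skipped for this process?")
        return _SESSION_FACTORY()

If the keystone listener process really is skipping whatever start-up step normally populates the factory, a guard like the one above would at least make the resulting bug report much more obvious than the bare TypeError.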
URL: From s at cassiba.com Fri Feb 16 21:10:16 2018 From: s at cassiba.com (Samuel Cassiba) Date: Fri, 16 Feb 2018 13:10:16 -0800 Subject: [openstack-dev] [chef] State of the Kitchen - 1st Edition Message-ID: This is the first edition of what is going on in Chef OpenStack. The goal is to give a quick overview to see our progress and what is on the menu. Appetizers ======== => Focus is on branching stable/pike and releasing Pike to Supermarket before the end of February if possible. => Tempest will continue to focus on deploying from git instead of packages. This provides a more consistent outcome. => Designate cookbook works with Pike and Queens in Ubuntu. CentOS is WIP. => A deploy guide on using Chef OpenStack in various scenarios is being formulated. Any help here is welcome, even a rubber duck. Entrees ====== => Chef 13 has landed in master (encompassing a staggering 2+ years of deprecations) - https://review.openstack.org/#/q/topic:bp/modern-chef => Test Kitchen is in openstack-chef-repo, with allinone, basic multinode and container-based scenarios. - https://git.openstack.org/cgit/openstack/openstack-chef-repo/ => MariaDB is being sourced from mariadb.org for consistency in outcome. Desserts ======= => Rakefiles are going away in favor of delivery local in Queens. - https://docs.chef.io/delivery_cli.html => Test Kitchen will become the focal point of CI, once we get the right power adapter for Ansible. => Upgrades! Upgr... you get the idea. :-) What's Cooking? ============= => A Bowl of Red measurements are geared for Americans, metric is approximate. adjust where appropriate. -- 4 lbs (1800 g) coarse ground beef -- 1/4 cup (60 ml) beef stock for added flavor and moisture -- 1 oz (28 g) chili powder (without salt, to control salinity) -- 4 or 5 chipotle chiles, minced, with adobo sauce, to taste -- 1 29 oz can (857 ml) of tomato sauce -- 1 tsp (4.7 g) each: kosher salt, ground black and white peppercorns -- 1 tbsp (14.3 g) each: --- onion powder --- paprika --- ground cumin --- ground cayenne --- ground jalapeño -- 1 box baby wipes, any brand Add ingredients to slowcooker, breaking up the meat as you add it. Cook on high for 4 hours, or until the aroma of cumin takes you. Serve straight up, or with shredded cheese and sour cream to tame the heat. Apply baby wipes when appropriate. Gets hotter overnight. Your humble cook, Samuel Cassiba (sc` / scas) From thingee at gmail.com Fri Feb 16 21:06:26 2018 From: thingee at gmail.com (Mike Perez) Date: Fri, 16 Feb 2018 13:06:26 -0800 Subject: [openstack-dev] Developer Mailing List Digest February 10-16th Message-ID: <20180216210600.GJ14568@gmail.com> HTML version: https://www.openstack.org/blog/?p=8321 Please help shape the future of the Developer Mailing List Digest with this two question survey: https://openstackfoundation.formstack.com/forms/openstack_developer_digest_feedback Contribute to the Dev Digest by summarizing OpenStack Dev and SIG List threads: * https://etherpad.openstack.org/p/devdigest * http://lists.openstack.org/pipermail/openstack-dev/ * http://lists.openstack.org/pipermail/openstack-sigs Success Bot Says ================ None for this week. 
Tell us yours in OpenStack IRC channels using the command "#success " More: https://wiki.openstack.org/wiki/Successes Thanks Bot Says =============== * diablo_rojo on #openstack-101 [0]: spotz for watching the #openstack-101 channel and helping to point newcomers to good resources to get them started :) * fungi on #openstack-infra [1]: dmsimard and mnaser for getting deep-linking in ara working for firefox * fungi on #openstack-infra [2]: to Matt Van Winkle for volunteering to act as internal advocate at Rackspace for our control plane account there! * AJaeger on #openstack-doc [3]: corvus for deleting /draft content * AJaeger on #openstack-infra [4]: cmurphy for your investigation * AJaeger on #openstack-infra [5]: to mordred for laying wonderful groundwork with the tox_siblings work. * smcginnis on #openstack-infra [6]: fungi jeblair mordred AJaeger and other infra-team members for clearing up release job issues * fungi on #openstack-infra [7]: zuul v3 for having such detailed configuration syntax error reporting. * fungi on #openstack-dev [8]: diablo_rojo and persia for smooth but "rocky" ptl elections! * Tell us yours in OpenStack IRC channels using the command "#thanks " * More: https://wiki.openstack.org/wiki/Thanks [0] - http://eavesdrop.openstack.org/irclogs/%23openstack-101/%23openstack-101.2017-12-13.log.html [1] - http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2017-12-20.log.html [2] - http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2018-01-09.log.html [3] - http://eavesdrop.openstack.org/irclogs/%23openstack-doc/%23openstack-doc.2018-01-22.log.html [4] - http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2018-01-30.log.html [5] - http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2018-02-03.log.html [6] - http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2017-12-11.log.html [7] - http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2018-02-14.log.html [8] - http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2018-02-15.log.html Community Summaries =================== Nova Placement update [0] Release Countdown [1] TC Report [2] Technical Committee Status update [3] [0] - http://lists.openstack.org/pipermail/openstack-dev/2018-February/127473.html [1] - http://lists.openstack.org/pipermail/openstack-dev/2018-February/127465.html [2] - http://lists.openstack.org/pipermail/openstack-dev/2018-February/127324.html [3] - http://lists.openstack.org/pipermail/openstack-dev/2018-February/127467.html PTG Bot HOWTO for Dublin ======================== The third PTG is an event where topics of discussion are loosely scheduled in tracks to maximize the attendee productivity. To keep track of what's happening currently we have an event schedule page [0]. Below are some helpful discussions in using PTG bot: Track Leads ----------- Track leads will be able issue various commands [1] in irc channel #openstack-ptg: * #TRACK now - example: #swift now brainstorming improvements to the ring. * Cross project interactions #TRACK now : - #nova now discussing #cinder interactions * What's next #TRACK next : - #api-sig next at 2pm we'll be discussing pagination woes * Clear all now and next entries for a track #TRACK clean: - #ironic clean Booking Reservable Rooms ------------------------ Reservable rooms and what's being discussed works the same with it showing on the event schedule page [0]. 
Different set of commands: * Get the slot codes with the book command #TRACK book: * Book a room with #TRACK book - example: #relmgt book Coiste Bainisti-MonP2 Any track can book additional space. These slots are 1 hour and 45 minutes long. You can ask ttx, diablo_rojo or #openstack-infra to add a track that's missing. Keep in mind various teams will be soley relying on this for space at the PTG. Additional commands can be found in the PTG bot README [1]. [0] - http://ptg.openstack.org/ptg.html [1] - https://git.openstack.org/cgit/openstack/ptgbot/tree/README.rst Full messages: http://lists.openstack.org/pipermail/openstack-dev/2018-February/127413.html and http://lists.openstack.org/pipermail/openstack-dev/2018-February/127414.html PTL Election Results and Conclusions ==================================== PTL election is over and the results are in [0]! Congrats to returning and new PTLs! There were three elections that took place: * Kolla [1] * Mistral [2] * Quality Assurance [3] On the statistics side, we renewed 17 of the 64 PTLs, so around 27%. Our usual renewal rate is more around 35%, but we did renew more at the last elections (40%) so this is likely why we didn't renew as much as usual this time. Much thanks to our election officials for carrying out this important responsibility in our community! [0] - http://lists.openstack.org/pipermail/openstack-dev/2018-February/127404.html [1] - https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_74983fd83cf5adab [2] - https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_74983fd83cf5adab [3] - https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_274f37d8e5497358 Full thread: http://lists.openstack.org/pipermail/openstack-dev/2018-February/thread.html#127404 Election Process Tweaks ======================= Discussions have started with ways to improve our election process. Current scripts in place have become brittle that are needed for governance documentation building that use gerrit lookup functions. Election officials currently have to make changes to an exception file [0] when email address with foundation accounts don't match gerrit. Discussed improvements include: * Uncouple TC and PTL election processes. * Make TC and PTL validation functions separate. * Change how-to-submit-candidacy directions to requires candidates email address to match their gerrit and foundation account. Comments, concerns and better ideas are welcome. The plan is to schedule time at the PTG to start hacking on some of those items so feedback before then would be appreciated by your election officials! [0] - http://git.openstack.org/cgit/openstack/election/tree/exceptions.txt Full thread: http://lists.openstack.org/pipermail/openstack-dev/2018-February/thread.html#127435 -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From thingee at gmail.com Fri Feb 16 21:42:01 2018 From: thingee at gmail.com (Mike Perez) Date: Fri, 16 Feb 2018 13:42:01 -0800 Subject: [openstack-dev] [ptg] [contributor-guide] Contributor Guide Discussion and Hacking Sessions Message-ID: <20180216214201.GM14568@gmail.com> Hey all, Our set of Contributor Guides [0] are making progress in providing our community with content for on-boarding new contributors of different types of work. At the PTG we'll be sharing space with the Documentation and i18n teams [2]. 
On Tuesday at 9:00-10:15 AM local time we'll be discussing next steps with the Contributor Guide of what's left from our StoryBoard tasks [3] and the current vision. There will also be impromptu meetup/hacking sessions [4] happening Monday thru Wednesday on the Contributor Guide with the OpenStack Upstream Institute team, who are interested in using this content for future events. You can read about contributing to the Contributor Guide for help [5]. I will be unable to physically attend the PTG this time, but Kendall Nelson (diablo_rojo on IRC) will be around to help lead these sessions. I will however be around on #openstack-doc to help with reviews or discussions in the mentioned impromptu hack times. Thanks everyone! [1] - https://docs.openstack.org/contributors/ [2] - https://etherpad.openstack.org/p/docs-i18n-ptg-rocky [3] - https://storyboard.openstack.org/#!/project/913 [4] - https://etherpad.openstack.org/p/OUI-Rocky-PTG [5] - https://docs.openstack.org/contributors/contributing -- Mike Perez (thingee) -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From melwittt at gmail.com Fri Feb 16 22:40:14 2018 From: melwittt at gmail.com (melanie witt) Date: Fri, 16 Feb 2018 14:40:14 -0800 Subject: [openstack-dev] [nova] signing up as a bug tag owner In-Reply-To: <1518790269.19368.2@smtp.office365.com> References: <1518790269.19368.2@smtp.office365.com> Message-ID: <4AFDFBC7-3444-4E0A-B7D0-C7180971A310@gmail.com> > On Feb 16, 2018, at 06:11, Balázs Gibizer wrote: > > Hi, > > On the weekly meeting melwitt suggested [1] to have people signed up for certain bug tags. I've already been trying to follow the bugs tagged with the 'notifications' tag so I sign up for this tag. > > Cheers, > gibi > > [1]http://eavesdrop.openstack.org/meetings/nova/2018/nova.2018-02-15-21.01.log.html#l-86 Fantastic, gibi! I’ve added a new row to the Tag Owner table for ‘notifications' bugs and added you as the tag owner. For anyone else who is interested in helping out, please see the wiki [2] for instructions on how to help with bug triage in nova. The easiest thing to do is to tag bugs with a category if they have not been tagged yet. This is meant to be a low time/low effort activity to put bugs into buckets for domain experts (bug tag owners) to triage as the next step. Tags are things like ‘api’, ‘volumes’, ‘scheduler’, ‘libvirt’, ‘xenapi’, ‘placement’, ‘notifications’, and so on. If there’s an area of nova you have familiarity with, please consider joining as a bug tag owner and help determine validity and severity of bugs in your area of expertise. There can be several owners for a tag, so feel free to pitch in even if a tag already has an owner. The idea with tag owners is to spread the task of bug triage among our team and it's more “fun(TM)” to triage bugs in an area in which you are familiar. I see a lot of untagged bugs since the weekly meeting have already been tagged with categories and we have a couple of new bug tag owners, so thank you for the help! 
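For tag owners who want to script their queue, a rough sketch with launchpadlib (illustrative only, not an official nova tool; adjust the project, tag and status filters to taste) could look like:

    # List still-New nova bugs carrying a given tag so a tag owner can
    # work through them. Requires: pip install launchpadlib
    from launchpadlib.launchpad import Launchpad

    lp = Launchpad.login_anonymously('nova-bug-triage', 'production')
    nova = lp.projects['nova']

    for task in nova.searchTasks(status=['New'], tags=['api']):
        print(task.title)

Swapping the tag for 'volumes', 'scheduler' and so on gives each owner their own triage list.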
Best, -melanie [2] https://wiki.openstack.org/wiki/Nova/BugTriage From fungi at yuggoth.org Fri Feb 16 23:57:26 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 16 Feb 2018 23:57:26 +0000 Subject: [openstack-dev] [chef] State of the Kitchen - 1st Edition In-Reply-To: References: Message-ID: <20180216235725.cim6xiyps4uifghb@yuggoth.org> On 2018-02-16 13:10:16 -0800 (-0800), Samuel Cassiba wrote: > This is the first edition of what is going on in Chef OpenStack. [...] I want to commend you on an excellent read. These team update summaries which have started to emerge are helping to draw me into what's going on in so many areas of the community I would otherwise have missed. Please keep it up! -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From ksnhr.tech at gmail.com Sat Feb 17 00:15:36 2018 From: ksnhr.tech at gmail.com (Kaz Shinohara) Date: Sat, 17 Feb 2018 09:15:36 +0900 Subject: [openstack-dev] [release] Collecting Queens demos In-Reply-To: References: <652102B1-1F30-4D78-A3A1-D7227D8F9829@openstack.org> Message-ID: Hi Anne, Noted with thanks. We will make a demo video & upload it to YouTube. Let you know when it will be ready. Cheers, Kaz 2018-02-17 1:23 GMT+09:00 Anne Bertucio : > Hi Kaz, > > Format is your choice, but we weren’t planning to host demos, just aggregate > the links in a single place for readers, so you’ll want to upload to > youtube/vimeo/etc and then send. > > Cheers, > Anne Bertucio > Marketing and Certification, OpenStack Foundation > anne at openstack.org | 206-992-7961 > > > > > On Feb 15, 2018, at 8:03 PM, Kaz Shinohara wrote: > > Hi Anne, > > > I'm wondering if I can send a demo video for heat-dashboard which is a > new feature in Queens. > Is there any format of the video ? > > Regards, > Kaz > > > 2018-02-16 8:09 GMT+09:00 Anne Bertucio : > > Hi all, > > We’re getting the Queens Release communications ready, and I’ve seen a > handful of video demos and tutorials of new Queens features. We’d like to > compile a list of these to share with the marketing community. If you have a > demo, would you please send a link my way so we can make sure to include it? > > If you don’t have a demo and have the time, I’d encourage you to make one of > a feature you’re really excited about! We’ve heard really positive feedback > about what’s already out there; people love them! 
> > > Cheers, > Anne Bertucio > OpenStack Foundation > anne at openstack.org | 206-992-7961 > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From emilien at redhat.com Sat Feb 17 01:36:31 2018 From: emilien at redhat.com (Emilien Macchi) Date: Fri, 16 Feb 2018 17:36:31 -0800 Subject: [openstack-dev] [yaql] [tripleo] Backward incompatible change in YAQL minor version Message-ID: Upgrading YAQL from 1.1.0 to 1.1.3 breaks advanced queries with groupBy aggregation. The commit that broke it is https://github.com/openstack/yaql/commit/3fb91784018de335440b01b3b069fe45dc53e025 It broke TripleO: https://bugs.launchpad.net/tripleo/+bug/1750032 But Alex and I figured (after a strong headache) that we needed to update the query like this: https://review.openstack.org/545498 It would be great to avoid this kind of change within minor versions, please please. Happy weekend, PS: I'm adding YAQL to my linkedin profile right now. -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Sat Feb 17 06:59:52 2018 From: emilien at redhat.com (Emilien Macchi) Date: Fri, 16 Feb 2018 22:59:52 -0800 Subject: [openstack-dev] [yaql] [tripleo] Backward incompatible change in YAQL minor version In-Reply-To: References: Message-ID: On Fri, Feb 16, 2018 at 5:36 PM, Emilien Macchi wrote: [...] > But Alex and I figured (after a strong headache) that we needed to update > the query like this: https://review.openstack.org/545498 > To be fully transparent, Alex and I went ahead and merged the patch but I have one doubt about this log when I'm testing a containerized undercloud: http://logs.openstack.org/06/542906/33/check/tripleo-ci-centos-7-containers-multinode/f262b7c/logs/undercloud/home/zuul/overcloud_deploy.log.txt.gz#_2018-02-17_03_11_07 Note this is only happening when we deploy a containerized overcloud with config-download on top of a containerized undercloud. I'll keep investigating during the weekend but it's maybe related to the yaql query change. Any inputs/feedback is welcome on that, Thanks. -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... 
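For anyone trying to pin down a YAQL behaviour change like this one, a small sketch along these lines (toy data and a toy expression, not the real tripleo-heat-templates query) lets you evaluate the same expression locally under yaql 1.1.0 and then 1.1.3 and compare the results, without going through a full Heat stack:

    # Evaluate a groupBy expression against made-up data; run once in a
    # virtualenv pinned to yaql==1.1.0 and once with yaql==1.1.3.
    from yaql import factory

    engine = factory.YaqlFactory().create()

    data = {'services': [
        {'name': 'nova', 'net': 'internal_api'},
        {'name': 'glance', 'net': 'storage'},
        {'name': 'neutron', 'net': 'internal_api'},
    ]}

    expression = engine('$.services.groupBy($.net)')
    print(expression.evaluate(data=data))

The handling of groupBy aggregation is what the linked commit changed, so queries that add an aggregator argument are the ones worth re-testing this way.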
URL: From gmann at ghanshyammann.com Sat Feb 17 11:55:21 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sat, 17 Feb 2018 20:55:21 +0900 Subject: [openstack-dev] [nova] signing up as a bug tag owner In-Reply-To: <4AFDFBC7-3444-4E0A-B7D0-C7180971A310@gmail.com> References: <1518790269.19368.2@smtp.office365.com> <4AFDFBC7-3444-4E0A-B7D0-C7180971A310@gmail.com> Message-ID: On Sat, Feb 17, 2018 at 7:40 AM, melanie witt wrote: >> On Feb 16, 2018, at 06:11, Balázs Gibizer wrote: >> >> Hi, >> >> On the weekly meeting melwitt suggested [1] to have people signed up for certain bug tags. I've already been trying to follow the bugs tagged with the 'notifications' tag so I sign up for this tag. >> >> Cheers, >> gibi >> >> [1]http://eavesdrop.openstack.org/meetings/nova/2018/nova.2018-02-15-21.01.log.html#l-86 > > Fantastic, gibi! I’ve added a new row to the Tag Owner table for ‘notifications' bugs and added you as the tag owner. > > For anyone else who is interested in helping out, please see the wiki [2] for instructions on how to help with bug triage in nova. > > The easiest thing to do is to tag bugs with a category if they have not been tagged yet. This is meant to be a low time/low effort activity to put bugs into buckets for domain experts (bug tag owners) to triage as the next step. Tags are things like ‘api’, ‘volumes’, ‘scheduler’, ‘libvirt’, ‘xenapi’, ‘placement’, ‘notifications’, and so on. > > If there’s an area of nova you have familiarity with, please consider joining as a bug tag owner and help determine validity and severity of bugs in your area of expertise. There can be several owners for a tag, so feel free to pitch in even if a tag already has an owner. The idea with tag owners is to spread the task of bug triage among our team and it's more “fun(TM)” to triage bugs in an area in which you are familiar. > > I see a lot of untagged bugs since the weekly meeting have already been tagged with categories and we have a couple of new bug tag owners, so thank you for the help! Thanks melwitt. I think you already have my name for api on wiki page. I will be putting some dedicated time to triage those. even though there is only 1 New bug but there are total 61 bugs for api [1] and i am sure there might be some cleanup needed for in-progress bugs etc. ..1 https://bugs.launchpad.net/nova/+bugs?field.tag=api -gmann > > Best, > -melanie > > [2] https://wiki.openstack.org/wiki/Nova/BugTriage > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From juliaashleykreger at gmail.com Sat Feb 17 18:39:10 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Sat, 17 Feb 2018 18:39:10 +0000 Subject: [openstack-dev] [ironic] evening gathering at the PTG Message-ID: Greetings Ironicers! Thanks to derekh, we have a reservation for our evening gathering at the PTG! We will be gathering at Fegan’s Pub on Tuesday the 27th at 7 PM. 146 Drumcondra Rd Lower Drumcondra, Dublin 9 http://faganspub.ie If anyone is interested in joining us that has not previously let us know, please let us know so we can update our reservation. Thanks, -Julia -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hongbin034 at gmail.com Sat Feb 17 18:47:02 2018 From: hongbin034 at gmail.com (Hongbin Lu) Date: Sat, 17 Feb 2018 13:47:02 -0500 Subject: [openstack-dev] [docs] About the convention to use '.' instead of 'source'. Message-ID: Hi all, We have contributors submit patches [1] about switching over from 'source' to '.'. Frankly, it is a bit confused for reviewers to review those patches since it is unclear what are the rationals of the change. By tracing down to the patch [2] that introduced this convention, unfortunately, it doesn't help since there is not too much information in the commit message. Moreover, this convention doesn't seem to be followed very well in the community. I saw devstack is still using 'source' instead of '.' [3], which contradicts to what the docs said [4]. If anyone can clarify the rationals of this convention, it will be really helpful. [1] https://review.openstack.org/#/c/543155/ [2] https://review.openstack.org/#/c/304545/3 [3] https://github.com/openstack-dev/devstack/blob/master/stack.sh#L592 [4] https://docs.openstack.org/doc-contrib-guide/writing-style/code-conventions Best regards, Hongbin -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Sat Feb 17 20:40:53 2018 From: emilien at redhat.com (Emilien Macchi) Date: Sat, 17 Feb 2018 12:40:53 -0800 Subject: [openstack-dev] [yaql] [tripleo] Backward incompatible change in YAQL minor version In-Reply-To: References: Message-ID: On Fri, Feb 16, 2018 at 10:59 PM, Emilien Macchi wrote: > On Fri, Feb 16, 2018 at 5:36 PM, Emilien Macchi > wrote: > [...] > >> But Alex and I figured (after a strong headache) that we needed to update >> the query like this: https://review.openstack.org/545498 >> > > To be fully transparent, Alex and I went ahead and merged the patch but I > have one doubt about this log when I'm testing a containerized undercloud: > http://logs.openstack.org/06/542906/33/check/tripleo-ci- > centos-7-containers-multinode/f262b7c/logs/undercloud/home/ > zuul/overcloud_deploy.log.txt.gz#_2018-02-17_03_11_07 > > Note this is only happening when we deploy a containerized overcloud with > config-download on top of a containerized undercloud. > I'll keep investigating during the weekend but it's maybe related to the > yaql query change. > I updated the heat-engine container on the undercloud with the new version of YAQL and it worked fine (since overcloud.yaml is generated when creating the overcloud Heat stack, it makes sense). Which means the patch works as expected. -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Sat Feb 17 21:03:13 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sat, 17 Feb 2018 21:03:13 +0000 Subject: [openstack-dev] [docs] About the convention to use '.' instead of 'source'. In-Reply-To: References: Message-ID: <20180217210312.mv43be7re73vac2i@yuggoth.org> On 2018-02-17 13:47:02 -0500 (-0500), Hongbin Lu wrote: [...] > If anyone can clarify the rationals of this convention, it will be > really helpful. [...] There's a trade-off here: while `.` is standardized in POSIX sh (under Utilities, Dot in the specification), it's easy to miss when reading documentation and/or cutting and pasting from examples. On the other hand, `source` is easier to see but was originally unique to csh (which lacks `.`) and subsequently borrowed by the bash shell environment as an alias for `.` ostensibly to ease migration for users of csh and its derivatives. 
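A tiny illustration of that difference (demo.rc and FOO are placeholder names I am making up here, and the exact error wording differs from shell to shell):

  $ echo 'export FOO=bar' > demo.rc
  $ bash -c '. ./demo.rc && echo $FOO'       # '.' is the POSIX spelling
  bar
  $ bash -c 'source ./demo.rc && echo $FOO'  # bash treats 'source' as an alias for '.'
  bar
  $ dash -c 'source ./demo.rc && echo $FOO'  # a plain POSIX sh has no 'source' builtin
  dash: 1: source: not found
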
The `source` command is not implemented by a number of other popular shells however, which may make it a poor interoperability choice (given csh is an arguably less popular shell these days) unless we assume a specific shell (e.g., bash). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From dprince at redhat.com Sat Feb 17 21:40:12 2018 From: dprince at redhat.com (Dan Prince) Date: Sat, 17 Feb 2018 16:40:12 -0500 Subject: [openstack-dev] [yaql] [tripleo] Backward incompatible change in YAQL minor version In-Reply-To: References: Message-ID: Thanks for the update Emilien. A couple of things to add: 1) This was really difficult to pin-point via the Heat stack error message ('list index out of range'). I actually had to go and add LOG.debug statements to Heat to get to the bottom of it. I aim to sync with a few of the Heat folks next week on this to see if we can do better here. 2) I had initially thought it would have been much better to revert the (breaking) change to python-yaql. That said it was from 2016! So I think our window of opportunity for the revert is probably way too large to consider that. Sounds like we need to publish the yaql package more often in RDO, etc. So your patch to update our queries is probably our only option. On Fri, Feb 16, 2018 at 8:36 PM, Emilien Macchi wrote: > Upgrading YAQL from 1.1.0 to 1.1.3 breaks advanced queries with groupBy > aggregation. > > The commit that broke it is > https://github.com/openstack/yaql/commit/3fb91784018de335440b01b3b069fe45dc53e025 > > It broke TripleO: https://bugs.launchpad.net/tripleo/+bug/1750032 > But Alex and I figured (after a strong headache) that we needed to update > the query like this: https://review.openstack.org/545498 > > It would be great to avoid this kind of change within minor versions, please > please. > > Happy weekend, > > PS: I'm adding YAQL to my linkedin profile right now. Be careful here. Do you really want to write YAQL queries all day! Dan > -- > Emilien Macchi > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From tony at bakeyournoodle.com Sun Feb 18 00:35:36 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Sun, 18 Feb 2018 11:35:36 +1100 Subject: [openstack-dev] [yaql] [tripleo] Backward incompatible change in YAQL minor version In-Reply-To: References: Message-ID: <20180218003536.GY23143@thor.bakeyournoodle.com> On Sat, Feb 17, 2018 at 04:40:12PM -0500, Dan Prince wrote: > Thanks for the update Emilien. A couple of things to add: > > 1) This was really difficult to pin-point via the Heat stack error > message ('list index out of range'). I actually had to go and add > LOG.debug statements to Heat to get to the bottom of it. I aim to sync > with a few of the Heat folks next week on this to see if we can do > better here. > > 2) I had initially thought it would have been much better to revert > the (breaking) change to python-yaql. That said it was from 2016! So I > think our window of opportunity for the revert is probably way too > large to consider that. Sounds like we need to publish the yaql > package more often in RDO, etc. So your patch to update our queries is > probably our only option. 
I'm keen to sit down at the PTG for a quick discussion on how the requirements team can better support RDO (and therefore tripleo) to test these OpenStack deliverables sooner. As you point out the commit was from Aug 2016, which was released in Mar 2017. It was added to upper-constraints almost immediately and global-requirements (as the minimum supported version) in Aug 2017. However, it was only recently added to RDO. So if there is anything we can do on the requirements team to signal these changes that isn't being done already lets work out what it is :) Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From mordred at inaugust.com Sun Feb 18 09:55:51 2018 From: mordred at inaugust.com (Monty Taylor) Date: Sun, 18 Feb 2018 03:55:51 -0600 Subject: [openstack-dev] [docs] About the convention to use '.' instead of 'source'. In-Reply-To: <20180217210312.mv43be7re73vac2i@yuggoth.org> References: <20180217210312.mv43be7re73vac2i@yuggoth.org> Message-ID: <373c2c5c-6d39-59f2-96f5-5fe9dbbb6364@inaugust.com> On 02/17/2018 03:03 PM, Jeremy Stanley wrote: > On 2018-02-17 13:47:02 -0500 (-0500), Hongbin Lu wrote: > [...] >> If anyone can clarify the rationals of this convention, it will be >> really helpful. > [...] > > There's a trade-off here: while `.` is standardized in POSIX sh > (under Utilities, Dot in the specification), it's easy to miss when > reading documentation and/or cutting and pasting from examples. On > the other hand, `source` is easier to see but was originally unique > to csh (which lacks `.`) and subsequently borrowed by the bash shell > environment as an alias for `.` ostensibly to ease migration for > users of csh and its derivatives. The `source` command is not > implemented by a number of other popular shells however, which may > make it a poor interoperability choice (given csh is an arguably > less popular shell these days) unless we assume a specific shell > (e.g., bash). I'd honestly argue in favor of assuming bash and using 'source' because it's more readable. We don't make allowances for alternate shells in our examples anyway. I personally try to use 'source' vs . and $() vs. `` as aggressively as I can. That said - I completely agree with fungi on the description of the tradeoffs of each direction, and I do think it's valuable to pick one for the docs. From sean.mcginnis at gmx.com Sun Feb 18 11:15:18 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Sun, 18 Feb 2018 05:15:18 -0600 Subject: [openstack-dev] [kolla]Fwd: [Openstack-stable-maint] Stable check of openstack/kolla failed References: Message-ID: <5548F4AD-589D-45A4-AE69-DFCEB68B1216@gmx.com> Hello kolla team, It looks like stable builds for kolla have been failing for some time now. Just forwarding this on to make sure the team is aware of it before the need for a stable release comes up. Thanks, Sean > Begin forwarded message: > > From: "A mailing list for the OpenStack Stable Branch test reports." > Subject: [Openstack-stable-maint] Stable check of openstack/kolla failed > Date: February 18, 2018 at 01:51:32 CST > To: openstack-stable-maint at lists.openstack.org > Reply-To: openstack-dev at lists.openstack.org > > Build failed. 
> > - build-openstack-sphinx-docs http://logs.openstack.org/periodic-stable/git.openstack.org/openstack/kolla/stable/ocata/build-openstack-sphinx-docs/b0f5081/html/ : SUCCESS in 2m 50s > - openstack-tox-py27 http://logs.openstack.org/periodic-stable/git.openstack.org/openstack/kolla/stable/ocata/openstack-tox-py27/b6577a4/ : SUCCESS in 2m 27s > - kolla-publish-centos-source http://logs.openstack.org/periodic-stable/git.openstack.org/openstack/kolla/stable/ocata/kolla-publish-centos-source/29e3a2b/ : POST_FAILURE in 1h 16m 44s > - kolla-publish-centos-binary http://logs.openstack.org/periodic-stable/git.openstack.org/openstack/kolla/stable/ocata/kolla-publish-centos-binary/dc12a93/ : POST_FAILURE in 1h 08m 48s (non-voting) > - kolla-publish-ubuntu-source http://logs.openstack.org/periodic-stable/git.openstack.org/openstack/kolla/stable/ocata/kolla-publish-ubuntu-source/dd419ce/ : POST_FAILURE in 56m 40s > - kolla-publish-ubuntu-binary http://logs.openstack.org/periodic-stable/git.openstack.org/openstack/kolla/stable/ocata/kolla-publish-ubuntu-binary/a7fca30/ : POST_FAILURE in 52m 05s (non-voting) > - kolla-publish-oraclelinux-source http://logs.openstack.org/periodic-stable/git.openstack.org/openstack/kolla/stable/ocata/kolla-publish-oraclelinux-source/7e87ef0/ : POST_FAILURE in 1h 35m 59s > - kolla-publish-oraclelinux-binary http://logs.openstack.org/periodic-stable/git.openstack.org/openstack/kolla/stable/ocata/kolla-publish-oraclelinux-binary/56eb5bd/ : POST_FAILURE in 1h 42m 57s (non-voting) > > _______________________________________________ > Openstack-stable-maint mailing list > Openstack-stable-maint at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-stable-maint -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Sun Feb 18 11:16:49 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Sun, 18 Feb 2018 05:16:49 -0600 Subject: [openstack-dev] [zaqar] Fwd: [Openstack-stable-maint] Stable check of openstack/zaqar failed References: Message-ID: <07003F21-AD51-4645-AA00-B510DA52A236@gmx.com> Hello zaqar team, It looks like stable jobs have been failing for some time now for zaqar stable branches. Just forwarding on to make sure the team is aware of this. Thanks, Sean > Begin forwarded message: > > From: "A mailing list for the OpenStack Stable Branch test reports." > Subject: [Openstack-stable-maint] Stable check of openstack/zaqar failed > Date: February 18, 2018 at 01:03:06 CST > To: openstack-stable-maint at lists.openstack.org > Reply-To: openstack-dev at lists.openstack.org > > Build failed. > > - build-openstack-sphinx-docs http://logs.openstack.org/periodic-stable/git.openstack.org/openstack/zaqar/stable/ocata/build-openstack-sphinx-docs/1e8b5bb/html/ : SUCCESS in 4m 08s > - openstack-tox-py27 http://logs.openstack.org/periodic-stable/git.openstack.org/openstack/zaqar/stable/ocata/openstack-tox-py27/db30e4f/ : TIMED_OUT in 40m 44s > > _______________________________________________ > Openstack-stable-maint mailing list > Openstack-stable-maint at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-stable-maint -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Sun Feb 18 16:01:52 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sun, 18 Feb 2018 16:01:52 +0000 Subject: [openstack-dev] [docs] About the convention to use '.' instead of 'source'. 
In-Reply-To: <373c2c5c-6d39-59f2-96f5-5fe9dbbb6364@inaugust.com> References: <20180217210312.mv43be7re73vac2i@yuggoth.org> <373c2c5c-6d39-59f2-96f5-5fe9dbbb6364@inaugust.com> Message-ID: <20180218160151.4m6yzuvd7pdq7c2c@yuggoth.org> On 2018-02-18 03:55:51 -0600 (-0600), Monty Taylor wrote: [...] > I'd honestly argue in favor of assuming bash and using 'source' > because it's more readable. We don't make allowances for alternate > shells in our examples anyway. > > I personally try to use 'source' vs . and $() vs. `` as > aggressively as I can. > > That said - I completely agree with fungi on the description of > the tradeoffs of each direction, and I do think it's valuable to > pick one for the docs. Yes, it's not my call but I too would prefer more readable examples over a strict adherence to POSIX. As long as we say somewhere that our examples assume the user is in a GNU bash(1) environment and that the examples may require minor adjustment for other shells, I think that's a perfectly reasonable approach. If there's a documentation style guide, that too would be a great place to encourage examples following certain conventions such as source instead of ., $() instead of ``, [] instead of test, an so on... and provide a place to explain the rationale so that reviewers have a convenient response they can link for bulk "improvements" which seem to indicate ignorance of our reasons for these choices. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From emilien at redhat.com Sun Feb 18 18:18:23 2018 From: emilien at redhat.com (Emilien Macchi) Date: Sun, 18 Feb 2018 10:18:23 -0800 Subject: [openstack-dev] [tripleo] Updates on containerized undercloud In-Reply-To: References: <1518518420.15968.6.camel@redhat.com> Message-ID: This is an update on what has been achieved this week with the regard of Containerized Undercloud efforts in TripleO: TL;DR: really good efforts have been made and we can now deploy a full (multinode) overcloud in CI. OVB testing in progress and lot of remaining items! ## Bugfixes docker-registry: add missing firewall rules - https://review.openstack.org/#/c/545185/ mistral-executor: mount /var/lib/mistral - https://review.openstack.org/#/c/545143/ docker: configure group/user for deployment_user - https://review.openstack.org/#/c/544761/ + dependencies Fix PublicVirtualFixedIPs in envs - https://review.openstack.org/#/c/544744/ Align zaqar max_messages_post_size with undercloud - https://review.openstack.org/#/c/544756/ undercloud_post: fix subnet name - https://review.openstack.org/#/c/544587/ ## CI We manage to run a containerized overcloud deployed by a containerized undercloud in CI, results can be seen here: https://review.openstack.org/#/c/542906/ The job is running on featureser010 now (for testing purpose) but as James mentioned in the review, we won't switch this job to run a containerized undercloud. Note there is no impact on the job runtime. We'll need to properly deprecate the non-containerized undercloud first but we'll need to find a CI job that we can use for gating, so we avoid regression during the cycle. Now we're working on deploying featureset001 (ovb-ha), with TLS, net-iso, Ironic/Nova/Neutron (baremetal bits) from a containerized undercloud: https://review.openstack.org/#/c/542556/ It's not working yet but we're working toward the blockers as they come during testing. 
# TLS Support All patches that were in progress have been merged, and now under testing in ovb-ha + containerized u/c (see above). # UI Support Work is still in progress, patches are ready for review, but some one them don't pass pep8 yet. We'll hopefully fix it soon. # Other items routed ctlplane networking: Harald is currently making progress on the items, some patches are ready for review. Create temp copy of tripleo-heat-templates before processing them: Bogdan is working on https://review.openstack.org/#/c/542875 - the patch is under review! Upgrades: no work has been started so far but we'll probably discuss about this topic during the PTG. As usual please comment or add anything that I missed. Thanks all for your help/reviews/efforts so far, Emilien On Tue, Feb 13, 2018 at 6:41 AM, Emilien Macchi wrote: > > > On Tue, Feb 13, 2018 at 2:40 AM, Harald Jensås wrote: > >> On Fri, 2018-02-09 at 14:39 -0800, Emilien Macchi wrote: >> > On Fri, Feb 9, 2018 at 2:30 PM, James Slagle >> > wrote: >> > [...] >> > >> > > You may want to add an item for the routed ctlplane work that >> > > landed >> > > at the end of Queens. Afaik, that will need to be supported with >> > > the >> > > containerized undercloud. >> > >> > Done: https://trello.com/c/kFtIkto1/17-routed-ctlplane-networking >> > >> >> Tanks Emilien, >> >> >> I added several work items to the Trello card, and a few patches. Still >> WiP. >> >> Do we have any CI that use containerized undercloud with actual Ironic >> deployement? Or are they all using deployed-server? >> >> E.g do we have anything actually testing this type of change? >> https://review.openstack.org/#/c/543582 >> >> I belive that would have to be an ovb job with containerized undercloud? >> > > I'm working on it since last week: https://trello.com/c/ > uLqbHTip/13-switch-other-jobs-to-run-a-containerized-undercloud > But currently trying to make things stable again, we introduce regressions > and this is high prio now. > -- > Emilien Macchi > -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Sun Feb 18 20:44:04 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Sun, 18 Feb 2018 15:44:04 -0500 Subject: [openstack-dev] [docs] About the convention to use '.' instead of 'source'. In-Reply-To: <20180218160151.4m6yzuvd7pdq7c2c@yuggoth.org> References: <20180217210312.mv43be7re73vac2i@yuggoth.org> <373c2c5c-6d39-59f2-96f5-5fe9dbbb6364@inaugust.com> <20180218160151.4m6yzuvd7pdq7c2c@yuggoth.org> Message-ID: <1518986610-sup-9087@lrrr.local> Excerpts from Jeremy Stanley's message of 2018-02-18 16:01:52 +0000: > On 2018-02-18 03:55:51 -0600 (-0600), Monty Taylor wrote: > [...] > > I'd honestly argue in favor of assuming bash and using 'source' > > because it's more readable. We don't make allowances for alternate > > shells in our examples anyway. > > > > I personally try to use 'source' vs . and $() vs. `` as > > aggressively as I can. > > > > That said - I completely agree with fungi on the description of > > the tradeoffs of each direction, and I do think it's valuable to > > pick one for the docs. > > Yes, it's not my call but I too would prefer more readable examples > over a strict adherence to POSIX. As long as we say somewhere that > our examples assume the user is in a GNU bash(1) environment and > that the examples may require minor adjustment for other shells, I > think that's a perfectly reasonable approach. 
If there's a > documentation style guide, that too would be a great place to > encourage examples following certain conventions such as source > instead of ., $() instead of ``, [] instead of test, an so on... and > provide a place to explain the rationale so that reviewers have a > convenient response they can link for bulk "improvements" which seem > to indicate ignorance of our reasons for these choices. I've proposed reverting the style-guide change that seems to have led to this discussion in https://review.openstack.org/#/c/545718/2 Doug From natsume.takashi at lab.ntt.co.jp Sun Feb 18 22:56:51 2018 From: natsume.takashi at lab.ntt.co.jp (Takashi Natsume) Date: Mon, 19 Feb 2018 07:56:51 +0900 Subject: [openstack-dev] [nova] Adding Takashi Natsume to python-novaclient core In-Reply-To: <1dc00987-28a6-c9d0-6e70-0a9346edd3f9@gmail.com> References: <1dc00987-28a6-c9d0-6e70-0a9346edd3f9@gmail.com> Message-ID: <6ff0919e-fb4f-3dc4-1ece-8c10da273724@lab.ntt.co.jp> Thank you, Matt and everyone. But I would like to become a core reviewer for the nova project as well as python-novaclient. I have contributed more in the nova project than python-novaclient. I have done total 2,700+ reviews for the nova project in all releases (*1). (Total 115 reviews only for python-novaclient.) *1: http://stackalytics.com/?release=all&user_id=natsume-takashi On 2018/02/16 2:18, Matt Riedemann wrote: > On 2/9/2018 9:01 AM, Matt Riedemann wrote: >> I'd like to add Takashi to the python-novaclient core team. >> >> python-novaclient doesn't get a ton of activity or review, but Takashi >> has been a solid reviewer and contributor to that project for quite >> awhile now: >> >> http://stackalytics.com/report/contribution/python-novaclient/180 >> >> He's always fast to get new changes up for microversion support and >> help review others that are there to keep moving changes forward. >> >> So unless there are objections, I'll plan on adding Takashi to the >> python-novaclient-core group next week. > > I've added Takashi to python-novaclient-core: > > https://review.openstack.org/#/admin/groups/572,members > > Thanks everyone. > Regards, Takashi Natsume NTT Software Innovation Center E-mail: natsume.takashi at lab.ntt.co.jp From mriedemos at gmail.com Mon Feb 19 00:31:47 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Sun, 18 Feb 2018 18:31:47 -0600 Subject: [openstack-dev] [keystone] Queens RC review dashboard In-Reply-To: <7b4c6301-790d-6c98-ff7a-15a4d312427f@gmail.com> References: <7b4c6301-790d-6c98-ff7a-15a4d312427f@gmail.com> Message-ID: <3380bab1-4a1b-6729-07bd-eaf19f9f4a7d@gmail.com> On 2/1/2018 9:51 AM, Lance Bragstad wrote: > Just like with feature freeze, I put together a review dashboard that > contains patches we need to land in order to cut a release candidate > [0]. I'll be adding more patches throughout the day, but so far there > are 21 changes there waiting for review. If there is something I missed, > please don't hesitate to ping me and I'll get it added. Thanks for all > the hard work. We're on the home stretch! > > [0]https://goo.gl/XVw3wr I reviewed your open stable/queens changes, left a question in one about how you want to handle the 'fixes' release note. I thought since I'm stable core I could +2 these but I can't, looks like keystone-stable-maint or the release core team can do that. 
-- Thanks, Matt From emilien at redhat.com Mon Feb 19 03:25:07 2018 From: emilien at redhat.com (Emilien Macchi) Date: Sun, 18 Feb 2018 19:25:07 -0800 Subject: [openstack-dev] [infra][all] New Zuul Depends-On syntax In-Reply-To: References: <87efmfz05l.fsf@meyer.lemoncheese.net> <005844f0-ea22-3e0b-fd2e-c1c889d9293b@inaugust.com> <2b04ea61-2b2c-d740-37e7-47215b10e3a0@nemebean.com> <87zi51v5uu.fsf@meyer.lemoncheese.net> <7bea8147-4d21-bbb3-7a28-a179a4a132af@redhat.com> <871si4czfe.fsf@meyer.lemoncheese.net> Message-ID: I just realized that the new syntax doesn't work when third party jobs use an old version of Zuul (e.g. RDO RCI). Which means: Depends-On: https://review.openstack.org/#/c/542556/ doesn't work and Depends-On: Ia30965b362d1c05d216f59b4cc1b3cb7e1284046 works for third party jobs. We have to be very careful how we use the feature in TripleO CI. I've lost a bit of time trying to figuring out why my code wasn't passing our functional tests when I realized my code wasn't properly checkout. My recommendation for TripleO devs: use the old syntax if you want your code to be tested by RDO Third party CI (now voting btw). Thanks, On Mon, Feb 5, 2018 at 1:11 PM, Alex Schultz wrote: > On Thu, Feb 1, 2018 at 11:55 AM, James E. Blair > wrote: > > Zane Bitter writes: > > > >> Yeah, it's definitely nice to have that flexibility. e.g. here is a > >> patch that wouldn't merge for 3 months because the thing it was > >> dependent on also got proposed as a backport: > >> > >> https://review.openstack.org/#/c/514761/1 > >> > >> From an OpenStack perspective, it would be nice if a Gerrit ID implied > >> a change from the same Gerrit instance as the current repo and the > >> same branch as the current patch if it exists (otherwise any branch), > >> and we could optionally use a URL instead to select a particular > >> change. > > > > Yeah, that's reasonable, and it is similar to things Zuul does in other > > areas, but I think one of the thing we want to do with Depends-On is > > consider that Zuul isn't the only audience. It's there just as much for > > the reviewers, and other folks. So when it comes to Gerrit change ids, > > I feel we had to constrain it to Gerrit's own behavior. When you click > > on one of those in Gerrit, it shows you all of the changes across all of > > the repos and branches with that change-id. So that result list is what > > Zuul should work with. Otherwise there's a discontinuity between what a > > user sees when they click the hyperlink under the change-id and what > > Zuul does. > > > > Similarly, in the new system, you click the URL and you see what Zuul is > > going to use. > > > > And that leads into the reason we want to drop the old syntax: to make > > it seamless for a GitHub user to know how to Depends-On a Gerrit change, > > and vice versa, with neither requiring domain-specific knowledge about > > the system. > > > > While I can appreciate that, having to manage urls for backports in > commit messages will lead to missing patches and other PEBAC related > problems. Perhaps rather than throwing out this functionality we can > push for improvements in the gerrit interaction itself? I'm really -1 > on removing the change-id syntax just for this reasoning. The UX of > having to manage complex depends-on urls for things like backports > makes switching to URLs a non-starter unless I have a bunch of > external system deps (and I generally don't). 
> > Thanks, > -Alex > > > -Jim > > > > ____________________________________________________________ > ______________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Mon Feb 19 03:26:42 2018 From: emilien at redhat.com (Emilien Macchi) Date: Sun, 18 Feb 2018 19:26:42 -0800 Subject: [openstack-dev] [tripleo] Usage of Depends-On in TripleO CI Message-ID: Just an FYI if you haven't seen the thread on openstack-dev. ---------- Forwarded message ---------- From: Emilien Macchi Date: Sun, Feb 18, 2018 at 7:25 PM Subject: Re: [openstack-dev] [infra][all] New Zuul Depends-On syntax To: "OpenStack Development Mailing List (not for usage questions)" < openstack-dev at lists.openstack.org> I just realized that the new syntax doesn't work when third party jobs use an old version of Zuul (e.g. RDO RCI). Which means: Depends-On: https://review.openstack.org/#/c/542556/ doesn't work and Depends-On: Ia30965b362d1c05d216f59b4cc1b3cb7e1284046 works for third party jobs. We have to be very careful how we use the feature in TripleO CI. I've lost a bit of time trying to figuring out why my code wasn't passing our functional tests when I realized my code wasn't properly checkout. My recommendation for TripleO devs: use the old syntax if you want your code to be tested by RDO Third party CI (now voting btw). Thanks, [...] -------------- next part -------------- An HTML attachment was scrubbed... URL: From aj at suse.com Mon Feb 19 07:14:46 2018 From: aj at suse.com (Andreas Jaeger) Date: Mon, 19 Feb 2018 08:14:46 +0100 Subject: [openstack-dev] [all][i18n] Accepting translations for Queens Message-ID: <21fa8c46-b6e6-990b-1ffa-d40bed01b1da@suse.com> Could everybody import translations, please? * for master so that you have translated releasenotes * for stable/queens for the iminent Queens release. Note that this one removes translated releasenotes, we only translate and publish releasenotes from master. Full list of open reviews: https://review.openstack.org/#/q/status:open+topic:zanata/translations Thanks, Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From mrunge at redhat.com Mon Feb 19 07:27:49 2018 From: mrunge at redhat.com (Matthias Runge) Date: Mon, 19 Feb 2018 08:27:49 +0100 Subject: [openstack-dev] [kolla]Fwd: [Openstack-stable-maint] Stable check of openstack/kolla failed In-Reply-To: <5548F4AD-589D-45A4-AE69-DFCEB68B1216@gmx.com> References: <5548F4AD-589D-45A4-AE69-DFCEB68B1216@gmx.com> Message-ID: <20180219072749.ez7w63kii3zs7kgs@sofja.berg.ol> On Sun, Feb 18, 2018 at 05:15:18AM -0600, Sean McGinnis wrote: > Hello kolla team, > > It looks like stable builds for kolla have been failing for some time now. 
Just forwarding this on to make sure the team is aware of it before the need for a stable release comes up. > > > Build failed. > > > > - build-openstack-sphinx-docs http://logs.openstack.org/periodic-stable/git.openstack.org/openstack/kolla/stable/ocata/build-openstack-sphinx-docs/b0f5081/html/ : SUCCESS in 2m 50s > > - openstack-tox-py27 http://logs.openstack.org/periodic-stable/git.openstack.org/openstack/kolla/stable/ocata/openstack-tox-py27/b6577a4/ : SUCCESS in 2m 27s > > - kolla-publish-centos-source http://logs.openstack.org/periodic-stable/git.openstack.org/openstack/kolla/stable/ocata/kolla-publish-centos-source/29e3a2b/ : POST_FAILURE in 1h 16m 44s > > - kolla-publish-centos-binary http://logs.openstack.org/periodic-stable/git.openstack.org/openstack/kolla/stable/ocata/kolla-publish-centos-binary/dc12a93/ : POST_FAILURE in 1h 08m 48s (non-voting) > > - kolla-publish-ubuntu-source http://logs.openstack.org/periodic-stable/git.openstack.org/openstack/kolla/stable/ocata/kolla-publish-ubuntu-source/dd419ce/ : POST_FAILURE in 56m 40s > > - kolla-publish-ubuntu-binary http://logs.openstack.org/periodic-stable/git.openstack.org/openstack/kolla/stable/ocata/kolla-publish-ubuntu-binary/a7fca30/ : POST_FAILURE in 52m 05s (non-voting) > > - kolla-publish-oraclelinux-source http://logs.openstack.org/periodic-stable/git.openstack.org/openstack/kolla/stable/ocata/kolla-publish-oraclelinux-source/7e87ef0/ : POST_FAILURE in 1h 35m 59s > > - kolla-publish-oraclelinux-binary http://logs.openstack.org/periodic-stable/git.openstack.org/openstack/kolla/stable/ocata/kolla-publish-oraclelinux-binary/56eb5bd/ : POST_FAILURE in 1h 42m 57s (non-voting) This one is differs significantly from the other build failures. Previously, only both ubuntu-related builds failed. Matthias -- Matthias Runge Red Hat GmbH, http://www.de.redhat.com/, Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Michael Cunningham, Michael O'Neill, Eric Shander From dabarren at gmail.com Mon Feb 19 07:36:00 2018 From: dabarren at gmail.com (Eduardo Gonzalez) Date: Mon, 19 Feb 2018 07:36:00 +0000 Subject: [openstack-dev] [kolla]Fwd: [Openstack-stable-maint] Stable check of openstack/kolla failed In-Reply-To: <20180219072749.ez7w63kii3zs7kgs@sofja.berg.ol> References: <5548F4AD-589D-45A4-AE69-DFCEB68B1216@gmx.com> <20180219072749.ez7w63kii3zs7kgs@sofja.berg.ol> Message-ID: Hi, thanks for the advice. Will take a look. Regards On Mon, Feb 19, 2018, 8:28 AM Matthias Runge wrote: > On Sun, Feb 18, 2018 at 05:15:18AM -0600, Sean McGinnis wrote: > > Hello kolla team, > > > > It looks like stable builds for kolla have been failing for some time > now. Just forwarding this on to make sure the team is aware of it before > the need for a stable release comes up. > > > > > Build failed. 
> > > > > > - build-openstack-sphinx-docs > http://logs.openstack.org/periodic-stable/git.openstack.org/openstack/kolla/stable/ocata/build-openstack-sphinx-docs/b0f5081/html/ > : SUCCESS in 2m 50s > > > - openstack-tox-py27 > http://logs.openstack.org/periodic-stable/git.openstack.org/openstack/kolla/stable/ocata/openstack-tox-py27/b6577a4/ > : SUCCESS in 2m 27s > > > - kolla-publish-centos-source > http://logs.openstack.org/periodic-stable/git.openstack.org/openstack/kolla/stable/ocata/kolla-publish-centos-source/29e3a2b/ > : POST_FAILURE in 1h 16m 44s > > > - kolla-publish-centos-binary > http://logs.openstack.org/periodic-stable/git.openstack.org/openstack/kolla/stable/ocata/kolla-publish-centos-binary/dc12a93/ > : POST_FAILURE in 1h 08m 48s (non-voting) > > > - kolla-publish-ubuntu-source > http://logs.openstack.org/periodic-stable/git.openstack.org/openstack/kolla/stable/ocata/kolla-publish-ubuntu-source/dd419ce/ > : POST_FAILURE in 56m 40s > > > - kolla-publish-ubuntu-binary > http://logs.openstack.org/periodic-stable/git.openstack.org/openstack/kolla/stable/ocata/kolla-publish-ubuntu-binary/a7fca30/ > : POST_FAILURE in 52m 05s (non-voting) > > > - kolla-publish-oraclelinux-source > http://logs.openstack.org/periodic-stable/git.openstack.org/openstack/kolla/stable/ocata/kolla-publish-oraclelinux-source/7e87ef0/ > : POST_FAILURE in 1h 35m 59s > > > - kolla-publish-oraclelinux-binary > http://logs.openstack.org/periodic-stable/git.openstack.org/openstack/kolla/stable/ocata/kolla-publish-oraclelinux-binary/56eb5bd/ > : POST_FAILURE in 1h 42m 57s (non-voting) > > This one is differs significantly from the other build failures. > > Previously, only both ubuntu-related builds failed. > > Matthias > -- > Matthias Runge > > Red Hat GmbH, http://www.de.redhat.com/, Registered seat: Grasbrunn, > Commercial register: Amtsgericht Muenchen, HRB 153243, > Managing Directors: Charles Cachera, Michael Cunningham, > Michael O'Neill, Eric Shander > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmitrii.shcherbakov at canonical.com Mon Feb 19 09:05:08 2018 From: dmitrii.shcherbakov at canonical.com (Dmitrii Shcherbakov) Date: Mon, 19 Feb 2018 12:05:08 +0300 Subject: [openstack-dev] [charms] In-Reply-To: Message-ID: <04ba9034-9319-4b96-1dde-89b888454b76@canonical.com> Hi Liam, > I was recently looking at how to support custom configuration that relies on post deployment setup. I would describe the problem in general as follows: 1) charms can get context not only from Juju (config options, relation data, leader data), environment (operating system release, OpenStack release, services running etc.) but also from a stateful data store (e.g. a Keystone database); 2) it's not easy to track application state from a charm because: authentication is needed to fetch persistent state, notifications from a data store cannot be reliably set up because charm code is ran periodically and it is not always present in memory (polling is neither timely nor efficient). Another problem is that software that holds the state needs to support data change notifications which raises version compatibility questions. 
By using actions we move the responsibility for data retrieval and change notifications to an operator but a more generic scenario would be modeling a feedback loop from an application to Juju as a modeling system where changes can be either automatic or gated by an operator (an orchestrator). Making it automatic would mean that a service would get notifications/poll data from a state store and would be authorized to use Juju client to make certain changes. Another problem to solve is maintenance of that state: if we start maintaining a key-value DB in leader settings we need to think about data migration over time and how to access the current state. In other words, in CRUD, the "C" part is relatively straightforward, "R" is more complicated with large data sets (if I have a lot of leader data, how do I interpret it efficiently?), "UD" is less clear - seems like there will have to be 3 or 4 actions per feature for C, [R], U and D or one action that can multiplex commands. This brings me to the question of how is it different from state-specific config values with a complex structure. Instead of leader data, a per-charm config option could hold state data in some format namespaced by a feature name or config file name to render. A data model would be needed to make sure we can create versioned application-specific state buckets (e.g. for upgrades, hold both states, then remove the old one). Application version-specific config values is something not modeled in Juju although custom application versions are present (https://jujucharms.com/docs/2.3/reference-hook-tools#application-version-set). Version information has to be set via a hook tool which means that it has to come from a custom config option anyway. Each charm has its own method to specify an application version and config dependencies are not modeled explicitly - one has to implement that logic in a charm without any Juju API for charms present the way I see it. config('key', 'app-version') - would be something to aim for. Do you have any thoughts about leader data vs a special complex config option per charm and versioning? Thanks! From sgolovat at redhat.com Mon Feb 19 10:04:57 2018 From: sgolovat at redhat.com (Sergii Golovatiuk) Date: Mon, 19 Feb 2018 11:04:57 +0100 Subject: [openstack-dev] [yaql] [tripleo] Backward incompatible change in YAQL minor version In-Reply-To: References: Message-ID: Hi, On Sat, Feb 17, 2018 at 10:40 PM, Dan Prince wrote: > Thanks for the update Emilien. A couple of things to add: > > 1) This was really difficult to pin-point via the Heat stack error > message ('list index out of range'). I actually had to go and add > LOG.debug statements to Heat to get to the bottom of it. I aim to sync > with a few of the Heat folks next week on this to see if we can do > better here. YAQL has CLI util that can be used for debugging queries. I found it quite useful. > > 2) I had initially thought it would have been much better to revert > the (breaking) change to python-yaql. That said it was from 2016! So I > think our window of opportunity for the revert is probably way too > large to consider that. Sounds like we need to publish the yaql > package more often in RDO, etc. So your patch to update our queries is > probably our only option. That's true. > > On Fri, Feb 16, 2018 at 8:36 PM, Emilien Macchi wrote: >> Upgrading YAQL from 1.1.0 to 1.1.3 breaks advanced queries with groupBy >> aggregation. 
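(A side note for anyone trying to reproduce this outside of a full deployment: the yaql package ships an interactive shell, so you can pin either version and evaluate a query directly. An illustrative session with made-up data; the result formatting is from memory rather than verbatim output:

  $ pip install 'yaql==1.1.0'    # or ==1.1.3 to compare the two behaviours
  $ yaql
  yaql> [["compute", 1], ["compute", 2], ["ctrl", 3]].groupBy($[0], $[1])
  [["compute", [1, 2]], ["ctrl", [3]]]

The optional third argument to groupBy() is the aggregation step, which is the part this thread says behaves differently between 1.1.0 and 1.1.3.)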
>> >> The commit that broke it is >> https://github.com/openstack/yaql/commit/3fb91784018de335440b01b3b069fe45dc53e025 >> >> It broke TripleO: https://bugs.launchpad.net/tripleo/+bug/1750032 >> But Alex and I figured (after a strong headache) that we needed to update >> the query like this: https://review.openstack.org/545498 >> >> It would be great to avoid this kind of change within minor versions, please >> please. >> >> Happy weekend, >> >> PS: I'm adding YAQL to my linkedin profile right now. > > Be careful here. Do you really want to write YAQL queries all day! > > Dan > >> -- >> Emilien Macchi >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Best Regards, Sergii Golovatiuk From lyarwood at redhat.com Mon Feb 19 10:26:08 2018 From: lyarwood at redhat.com (Lee Yarwood) Date: Mon, 19 Feb 2018 10:26:08 +0000 Subject: [openstack-dev] [ffu][upgrades] Dublin PTG room and agenda Message-ID: <20180219102608.yn63ja4o6hfchbyg@lyarwood.usersys.redhat.com> Hello all, A very late mail to highlight that there will once again be a 1 day track/room dedicated to talking about Fast-forward upgrades at the upcoming PTG in Dublin. The etherpad for which is listed below: https://etherpad.openstack.org/p/ffu-ptg-rocky Please feel free to add items to the pad, I'd really like to see some concrete action items finally come from these discussions ahead of R. Thanks in advance and see you in Dublin! -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: not available URL: From pkovar at redhat.com Mon Feb 19 14:07:42 2018 From: pkovar at redhat.com (Petr Kovar) Date: Mon, 19 Feb 2018 15:07:42 +0100 Subject: [openstack-dev] Debian OpenStack packages switching to Py3 for Queens In-Reply-To: References: <2916933d-c5be-9301-f8de-e0d380627c54@debian.org> <20180216154216.06843030ae43a131185b875e@redhat.com> Message-ID: <20180219150742.425e5f66543cb5b4ffbce43d@redhat.com> On Fri, 16 Feb 2018 17:49:37 +0100 Thomas Goirand wrote: > On 02/16/2018 03:42 PM, Petr Kovar wrote: > > On Thu, 15 Feb 2018 09:31:19 +0100 > > Thomas Goirand wrote: > > > >> Hi, > >> > >> Since I'm getting some pressure from other DDs to actively remove Py2 > >> support from my packages, I'm very much considering switching all of the > >> Debian packages for Queens to using exclusively Py3. I would have like > >> to read some opinions about this. Is it a good time for such move? I > >> hope it is, because I'd like to maintain as few Python package with Py2 > >> support at the time of Debian Buster freeze. > >> > >> Also, doing Queens, I've noticed that os-xenapi is still full of py2 > >> only stuff in os_xenapi/dom0. Can we get those fixes? 
Here's my patch: > >> > >> https://review.openstack.org/544809 > > > > Hey Thomas, slightly off-topic to this, but would it be a good idea to > > resurrect OpenStack install guides for Debian if Debian packages are still > > maintained? > > Yes it would. I'm not sure where to start, since all the doc has moved > to individual projects. Right, I'd probably start with projects listed in minimal deployment: https://docs.openstack.org/install-guide/openstack-services.html#minimal-deployment Copying Ubuntu-specific pages might save some time. Old content is still available from https://github.com/openstack/openstack-manuals/tree/a1f1748478125ccd68d90a98ccc06c7ec359d3a0/doc/install-guide/source. Best, pk From rbowen at redhat.com Mon Feb 19 14:12:37 2018 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 19 Feb 2018 09:12:37 -0500 Subject: [openstack-dev] [PTG] Project interviews at the PTG Message-ID: I promise this is the last time I'll bug you about this. (Except on-site, of course!) I still have lots and lots of space for team/project/whatever interviews at the PTG. You can sign up at https://docs.google.com/spreadsheets/d/1MK7rCgYXCQZP1AgQ0RUiuc-cEXIzW5RuRzz5BWhV4nQ/edit#gid=0 You can see some examples of previous interviews at http://youtube.com/RDOCommunity For the most part, interviews focus on what your team accomplished during the Queens cycle and what you want to work on in Rocky. However, we can also talk about other things like governance, community, related projects, licensing, or anything else that you feel is related to the OpenStack community. I encourage you to talk with your team, and find 2 or 3 people who can speak most eloquently about what you are trying to do, and find a time that works for you. I'll also have the schedules posted on-site, so you can sign up there, if you're still unsure of your schedule. But signing up ahead of time lets me know whether Wednesday is really a vacation day. ;-) See you in Dublin! -- Rich Bowen - rbowen at redhat.com @RDOcommunity // @CentOSProject // @rbowen From sathlang at redhat.com Mon Feb 19 14:18:54 2018 From: sathlang at redhat.com (Sofer Athlan-Guyot) Date: Mon, 19 Feb 2018 15:18:54 +0100 Subject: [openstack-dev] [yaql] [tripleo] Backward incompatible change in YAQL minor version In-Reply-To: References: Message-ID: <87vaetcbap.fsf@work.i-did-not-set--mail-host-address--so-tickle-me> Hi, Emilien Macchi writes: > Upgrading YAQL from 1.1.0 to 1.1.3 breaks advanced queries with groupBy > aggregation. > > The commit that broke it is > https://github.com/openstack/yaql/commit/3fb91784018de335440b01b3b069fe45dc53e025 > > It broke TripleO: https://bugs.launchpad.net/tripleo/+bug/1750032 > But Alex and I figured (after a strong headache) that we needed to update > the query like this: https://review.openstack.org/545498 > This is great, but we still have a pending issue. Mixed upgrade jobs are failing from Pike on. Those are very experimental jobs[1][2] but the error is present. The problem being that in mixed version we have the 1.1.3 yaql version (master undercloud) but not the fix in the templates which are either N-1 or N-3. But if we get the fix in previous version, the deployment shouldn't work anymore as we would not have yaql 1.1.3, but the new syntax. It's not only CI which is affected. Any kind of mixed version operation would fail as well. 
[1] P->Q: http://logs.openstack.org/62/545762/3/experimental/tripleo-ci-centos-7-scenario001-multinode-oc-upgrade/afc98a5/logs/undercloud/home/zuul/overcloud_deploy.log.txt.gz#_2018-02-19_09_48_54 [2] Fast Forward Upgrade: http://logs.openstack.org/86/525686/55/experimental/tripleo-ci-centos-7-scenario001-multinode-ffu-upgrade/5412555/logs/undercloud/home/zuul/overcloud_deploy.log.txt.gz > It would be great to avoid this kind of change within minor versions, > please please. > > Happy weekend, > > PS: I'm adding YAQL to my linkedin profile right now. > -- > Emilien Macchi > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Sofer Athlan-Guyot From bdobreli at redhat.com Mon Feb 19 14:24:24 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Mon, 19 Feb 2018 15:24:24 +0100 Subject: [openstack-dev] [yaql] [tripleo] Backward incompatible change in YAQL minor version In-Reply-To: <87vaetcbap.fsf@work.i-did-not-set--mail-host-address--so-tickle-me> References: <87vaetcbap.fsf@work.i-did-not-set--mail-host-address--so-tickle-me> Message-ID: On 2/19/18 3:18 PM, Sofer Athlan-Guyot wrote: > Hi, > > Emilien Macchi writes: > >> Upgrading YAQL from 1.1.0 to 1.1.3 breaks advanced queries with groupBy >> aggregation. >> >> The commit that broke it is >> https://github.com/openstack/yaql/commit/3fb91784018de335440b01b3b069fe45dc53e025 >> >> It broke TripleO: https://bugs.launchpad.net/tripleo/+bug/1750032 >> But Alex and I figured (after a strong headache) that we needed to update >> the query like this: https://review.openstack.org/545498 >> > > This is great, but we still have a pending issue. Mixed upgrade jobs > are failing from Pike on. Those are very experimental jobs[1][2] but > the error is present. The problem being that in mixed version we have > the 1.1.3 yaql version (master undercloud) but not the fix in the > templates which are either N-1 or N-3. > > But if we get the fix in previous version, the deployment shouldn't work > anymore as we would not have yaql 1.1.3, but the new syntax. With a backport of the YAQL fixes for tht made for Pike, would it be the full fix to make a backport of yaql 1.1.3 for Pike repos as well? Or am I missing something? > > It's not only CI which is affected. Any kind of mixed version operation > would fail as well. > > [1] P->Q: http://logs.openstack.org/62/545762/3/experimental/tripleo-ci-centos-7-scenario001-multinode-oc-upgrade/afc98a5/logs/undercloud/home/zuul/overcloud_deploy.log.txt.gz#_2018-02-19_09_48_54 > [2] Fast Forward Upgrade: http://logs.openstack.org/86/525686/55/experimental/tripleo-ci-centos-7-scenario001-multinode-ffu-upgrade/5412555/logs/undercloud/home/zuul/overcloud_deploy.log.txt.gz > >> It would be great to avoid this kind of change within minor versions, >> please please. >> >> Happy weekend, >> >> PS: I'm adding YAQL to my linkedin profile right now. 
>> -- >> Emilien Macchi >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Best regards, Bogdan Dobrelya, Irc #bogdando From lbragstad at gmail.com Mon Feb 19 14:32:14 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Mon, 19 Feb 2018 08:32:14 -0600 Subject: [openstack-dev] [keystone] Queens RC review dashboard In-Reply-To: <3380bab1-4a1b-6729-07bd-eaf19f9f4a7d@gmail.com> References: <7b4c6301-790d-6c98-ff7a-15a4d312427f@gmail.com> <3380bab1-4a1b-6729-07bd-eaf19f9f4a7d@gmail.com> Message-ID: Nice, thanks for the reviews. I'll check with the release team if we don't have a stable core approve them by midday. On Sun, Feb 18, 2018 at 6:31 PM, Matt Riedemann wrote: > On 2/1/2018 9:51 AM, Lance Bragstad wrote: > >> Just like with feature freeze, I put together a review dashboard that >> contains patches we need to land in order to cut a release candidate >> [0]. I'll be adding more patches throughout the day, but so far there >> are 21 changes there waiting for review. If there is something I missed, >> please don't hesitate to ping me and I'll get it added. Thanks for all >> the hard work. We're on the home stretch! >> >> [0]https://goo.gl/XVw3wr >> > > I reviewed your open stable/queens changes, left a question in one about > how you want to handle the 'fixes' release note. > > I thought since I'm stable core I could +2 these but I can't, looks like > keystone-stable-maint or the release core team can do that. > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From james.slagle at gmail.com Mon Feb 19 14:41:14 2018 From: james.slagle at gmail.com (James Slagle) Date: Mon, 19 Feb 2018 09:41:14 -0500 Subject: [openstack-dev] [TripleO] Deep dive on Ansible Integration Thursday Feb 22 1400UTC Message-ID: As mentioned in the TripleO meeting last week, I volunteered to give a deep dive on the state of TripleO and Ansible integration with config-download. I'll do that this week on Thursday February 22nd at 1400UTC. Anyone can join via bluejeans: https://bluejeans.com/7754237859/ Etherpad: https://bluejeans.com/7754237859/ Optional pre-reading: https://docs.openstack.org/tripleo-docs/latest/install/advanced_deployment/ansible_config_download.html The session will be recorded and later uploaded to Youtube at: https://www.youtube.com/channel/UCNGDxZGwUELpgaBoLvABsTA -- -- James Slagle -- From andrea.frittoli at gmail.com Mon Feb 19 14:46:25 2018 From: andrea.frittoli at gmail.com (Andrea Frittoli) Date: Mon, 19 Feb 2018 14:46:25 +0000 Subject: [openstack-dev] [QA][all] Migration of Tempest / Grenade jobs to Zuul v3 native In-Reply-To: References: Message-ID: Dear all, updates: - tempest-full-queens and tempest-full-py3-queens are now available for testing of branchless repositories [0]. They are used for tempest and devstack-gate. If you own a tempest plugin in a branchless repo, you may consider adding similar jobs to your plugin if you use it for tests on stable/queen as well. 
- if you have migrated jobs based on devstack-tempest please let me know, I'm building reference docs and I'd like to include as many examples as possible - work on multi-node is in progress, but not ready still - you can follow the patches in the multinode branch [1] - updates on some of the points from my previous email are inline below Andrea Frittoli (andreaf) [0] http://git.openstack.org/cgit/openstack/tempest/tree/.zuul.yaml#n73 [1] https://review.openstack.org/#/q/status:open++branch:master+topic:multinode On Thu, Feb 15, 2018 at 11:31 PM Andrea Frittoli wrote: > Dear all, > > this is the first or a series of ~regular updates on the migration of > Tempest / Grenade jobs to Zuul v3 native. > > The QA team together with the infra team are working on providing the > OpenStack community with a set of base Tempest / Grenade jobs that can be > used as a basis to write new CI jobs / migrate existing legacy ones with a > minimal effort and very little or no Ansible knowledge as a precondition. > > The effort is tracked in an etherpad [0]; I'm trying to keep the > etherpad up to date but it may not always be a source of truth. > > Useful jobs available so far: > - devstack-tempest [0] is a simple tempest/devstack job that runs keystone > glance nova cinder neutron swift and tempest *smoke* filter > - tempest-full [1] is similar but runs a full test run - it replaces the > legacy tempest-dsvm-neutron-full from the integrated gate > - tempest-full-py3 [2] runs a full test run on python3 - it replaces the > legacy tempest-dsvm-py35 > Some more details on this topic: what I did not mention in my previous email is that the autogenerated Tempest / Grenade CI jobs (legacy-* playbooks) are not meant to be used as a basis for Zuul V3 native jobs. To create Zuul V3 Tempest / Grenade native jobs for your projects you need to through away the legacy playbooks and defined new jobs in .zuul.yaml, as documented in the zuul v3 docs [2]. The parent job for a single node Tempest job will usually be devstack-tempest. Example migrated jobs are avilable, for instance: [3] [4]. [2] https://docs.openstack.org/infra/manual/zuulv3.html#howto-update-legacy-jobs [3] http://git.openstack.org/cgit/openstack/sahara-tests/tree/.zuul.yaml#n21 [4] https://review.openstack.org/#/c/543048/5 > > Both tempest-full and tempest-full-py3 are part of integrated-gate > templates, starting from stable/queens on. > The other stable branches still run the legacy jobs, since > devstack ansible changes have not been backported (yet). If we do backport > it will be up to pike maximum. > > Those jobs work in single node mode only at the moment. Enabling multinode > via job configuration only require a new Zuul feature [4][5] that should be > available soon; the new feature allows defining host/group variables in the > job definition, which means setting variables which are specific to one > host or a group of hosts. > Multinode DVR and Ironic jobs will require migration of the ovs-* roles > form devstack-gate to devstack as well. > > Grenade jobs (single and multinode) are still legacy, even if the *legacy* > word has been removed from the name. > They are currently temporarily hosted in the neutron repository. They are > going to be implemented as Zuul v3 native in the grenade repository. > > Roles are documented, and a couple of migration tips for DEVSTACK_GATE > flags is available in the etherpad [0]; more comprehensive examples / > docs will be available as soon as possible. 
> > Please let me know if you find this update useful and / or if you would > like to see different information in it. > I will send further updates as soon as significant changes / new features > become available. > > Andrea Frittoli (andreaf) > > [0] https://etherpad.openstack.org/p/zuulv3-native-devstack-tempest-jobs > [1] http://git.openstack.org/cgit/openstack/tempest/tree/.zuul.yaml#n1 > [2] http://git.openstack.org/cgit/openstack/tempest/tree/.zuul.yaml#n29 > [3] http://git.openstack.org/cgit/openstack/tempest/tree/.zuul.yaml#n47 > [4] https://etherpad.openstack.org/p/zuulv3-group-variables > [5] https://review.openstack.org/#/c/544562/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mbayer at redhat.com Mon Feb 19 15:00:59 2018 From: mbayer at redhat.com (Michael Bayer) Date: Mon, 19 Feb 2018 10:00:59 -0500 Subject: [openstack-dev] [oslo.db] [all] please DO NOT IMPORT from oslo_db.tests.* ! projects doing this need to revert ASAP Message-ID: Hi list - Apparently Cinder was misled by my deprecations within the oslo_db.sqlalchemy.test_base package of DbFixture and DbTestCase, and in https://review.openstack.org/#/c/522290/ the assumption was made that these should be imported from oslo_db.tests.sqlalchemy. This is an immense mistake on my part that I did not expect people to go looking for the same names elsewhere in private packages and now we have a serious downstream issue as these modules are not packaged, as well as the possibility that the oslo_db.tests. package is now locked in time and I have to add deprecations there also. If anyone knows of projects (or feels like helping me search) that are importing *anything* from oslo_db.tests these must be reverted ASAP. From fungi at yuggoth.org Mon Feb 19 15:03:41 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 19 Feb 2018 15:03:41 +0000 Subject: [openstack-dev] [infra][all] New Zuul Depends-On syntax In-Reply-To: References: <87efmfz05l.fsf@meyer.lemoncheese.net> <005844f0-ea22-3e0b-fd2e-c1c889d9293b@inaugust.com> <2b04ea61-2b2c-d740-37e7-47215b10e3a0@nemebean.com> <87zi51v5uu.fsf@meyer.lemoncheese.net> <7bea8147-4d21-bbb3-7a28-a179a4a132af@redhat.com> <871si4czfe.fsf@meyer.lemoncheese.net> Message-ID: <20180219150341.676l7dxwskwu3uej@yuggoth.org> On 2018-02-18 19:25:07 -0800 (-0800), Emilien Macchi wrote: [...] > My recommendation for TripleO devs: use the old syntax if you want your > code to be tested by RDO Third party CI [...] This is hopefully only a temporary measure? I think I've heard it mentioned that planning is underway to switch that CI system to Zuul v3 (perhaps after 3.0.0 officially releases soon). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From liam.young at canonical.com Mon Feb 19 15:11:13 2018 From: liam.young at canonical.com (Liam Young) Date: Mon, 19 Feb 2018 15:11:13 +0000 Subject: [openstack-dev] [charms] In-Reply-To: <04ba9034-9319-4b96-1dde-89b888454b76@canonical.com> References: <04ba9034-9319-4b96-1dde-89b888454b76@canonical.com> Message-ID: On Mon, Feb 19, 2018 at 9:05 AM, Dmitrii Shcherbakov wrote: > Hi Liam, > >> I was recently looking at how to support custom configuration that relies >> on post deployment setup. 
> > I would describe the problem in general as follows: > > 1) charms can get context not only from Juju (config options, relation data, > leader data), environment (operating system release, OpenStack release, > services running etc.) but also from a stateful data store (e.g. a Keystone > database); > 2) it's not easy to track application state from a charm because: > authentication is needed to fetch persistent state, notifications from a > data store cannot be reliably set up because charm code is ran periodically > and it is not always present in memory (polling is neither timely nor > efficient). Another problem is that software that holds the state needs to > support data change notifications which raises version compatibility > questions. > > By using actions we move the responsibility for data retrieval and change > notifications to an operator but a more generic scenario would be modeling a > feedback loop from an application to Juju as a modeling system where changes > can be either automatic or gated by an operator (an orchestrator). Making it > automatic would mean that a service would get notifications/poll data from a > state store and would be authorized to use Juju client to make certain > changes. This is an interesting idea, but there is no such mechanism within Juju that I know of. > > Another problem to solve is maintenance of that state: if we start > maintaining a key-value DB in leader settings we need to think about data > migration over time and how to access the current state. Data migration from where to where? We access the current state by retrieving the data from leader db, or am I missing something here? > In other words, in > CRUD, the "C" part is relatively straightforward, "R" is more complicated > with large data sets (if I have a lot of leader data, how do I interpret it > efficiently?), Perhaps I'm being naive but I don't see these developing into data sets large enough to cause performance problems. > "UD" is less clear - seems like there will have to be 3 or 4 > actions per feature for C, [R], U and D or one action that can multiplex > commands. Each time the action is run the context associated with the action is deleted and recreated. If an action argument is unset I guess we could interpret that as leave-unchanged. > > This brings me to the question of how is it different from state-specific > config values with a complex structure. To my mind the difference is complexity for the end user. An action has clearly defined arguments and the charm action code looks after forming this into the correct context. > Instead of leader data, a per-charm > config option could hold state data in some format namespaced by a feature > name or config file name to render. A data model would be needed to make > sure we can create versioned application-specific state buckets (e.g. for > upgrades, hold both states, then remove the old one). > > Application version-specific config values is something not modeled in Juju > although custom application versions are present > (https://jujucharms.com/docs/2.3/reference-hook-tools#application-version-set). > Version information has to be set via a hook tool which means that it has to > come from a custom config option anyway. Each charm has its own method to > specify an application version and config dependencies are not modeled > explicitly - one has to implement that logic in a charm without any Juju API > for charms present the way I see it. > > config('key', 'app-version') - would be something to aim for. 
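To make that concrete, here is a rough sketch of the action-plus-leader-data approach I have in mind. It is purely illustrative: the action name, argument names and keys are made up for the example, and only the charmhelpers calls (action_get, leader_set and friends) are existing interfaces.

    # Hypothetical action handler: capture operator-supplied post-deployment
    # settings from action arguments and persist them in leader settings so
    # the charm can render them into its templates on the next hook run.
    from charmhelpers.core.hookenv import (
        action_fail,
        action_get,
        is_leader,
        leader_set,
    )

    def set_custom_config():
        if not is_leader():
            action_fail('please run this action on the leader unit')
            return
        settings = {}
        # Arguments the operator leaves unset come back as None and are
        # treated as leave-unchanged.
        for key in ('custom-theme', 'custom-endpoint'):
            value = action_get(key)
            if value is not None:
                settings[key] = value
        if settings:
            leader_set(settings)

The arguments themselves would be declared in actions.yaml, so the operator-facing interface stays small and explicit while the charm code takes care of turning them into the right context for templating.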
> > Do you have any thoughts about leader data vs a special complex config > option per charm and versioning? > > Thanks! Thanks for the feedback Dmitrii From aj at suse.com Mon Feb 19 15:13:04 2018 From: aj at suse.com (Andreas Jaeger) Date: Mon, 19 Feb 2018 16:13:04 +0100 Subject: [openstack-dev] [oslo.db] [all] please DO NOT IMPORT from oslo_db.tests.* ! projects doing this need to revert ASAP In-Reply-To: References: Message-ID: On 2018-02-19 16:00, Michael Bayer wrote: > Hi list - > > Apparently Cinder was misled by my deprecations within the > oslo_db.sqlalchemy.test_base package of DbFixture and DbTestCase, and > in https://review.openstack.org/#/c/522290/ the assumption was made > that these should be imported from oslo_db.tests.sqlalchemy. This > is an immense mistake on my part that I did not expect people to go > looking for the same names elsewhere in private packages and now we > have a serious downstream issue as these modules are not packaged, as > well as the possibility that the oslo_db.tests. package is now locked > in time and I have to add deprecations there also. > > If anyone knows of projects (or feels like helping me search) that are > importing *anything* from oslo_db.tests these must be reverted ASAP. cinder, glance, ironic,... see http://codesearch.openstack.org/?q=oslo_db.tests&i=nope&files=&repos= ;( Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From doug at doughellmann.com Mon Feb 19 15:15:34 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 19 Feb 2018 10:15:34 -0500 Subject: [openstack-dev] [oslo.db] [all] please DO NOT IMPORT from oslo_db.tests.* ! projects doing this need to revert ASAP In-Reply-To: References: Message-ID: <1519053177-sup-2744@lrrr.local> Excerpts from Michael Bayer's message of 2018-02-19 10:00:59 -0500: > Hi list - > > Apparently Cinder was misled by my deprecations within the > oslo_db.sqlalchemy.test_base package of DbFixture and DbTestCase, and > in https://review.openstack.org/#/c/522290/ the assumption was made > that these should be imported from oslo_db.tests.sqlalchemy. This > is an immense mistake on my part that I did not expect people to go > looking for the same names elsewhere in private packages and now we > have a serious downstream issue as these modules are not packaged, as > well as the possibility that the oslo_db.tests. package is now locked > in time and I have to add deprecations there also. > > If anyone knows of projects (or feels like helping me search) that are > importing *anything* from oslo_db.tests these must be reverted ASAP. > If we have modules or classes we don't expect people to be importing directly, we need to prefix the names with _ to comply with the naming conventions we have previously told everyone to look for to recognize private code. I think it's safe to treat "tests" as an exception (after resolving this case) but we should probably document that. Doug From sean.mcginnis at gmx.com Mon Feb 19 15:19:38 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 19 Feb 2018 09:19:38 -0600 Subject: [openstack-dev] [oslo.db] [all] please DO NOT IMPORT from oslo_db.tests.* ! 
projects doing this need to revert ASAP In-Reply-To: References: Message-ID: <20180219151937.GA469@sm-xps> On Mon, Feb 19, 2018 at 04:13:04PM +0100, Andreas Jaeger wrote: > On 2018-02-19 16:00, Michael Bayer wrote: > > Hi list - > > > > Apparently Cinder was misled by my deprecations within the > > oslo_db.sqlalchemy.test_base package of DbFixture and DbTestCase, and > > in https://review.openstack.org/#/c/522290/ the assumption was made > > that these should be imported from oslo_db.tests.sqlalchemy. This > > is an immense mistake on my part that I did not expect people to go > > looking for the same names elsewhere in private packages and now we > > have a serious downstream issue as these modules are not packaged, as > > well as the possibility that the oslo_db.tests. package is now locked > > in time and I have to add deprecations there also. > > > > If anyone knows of projects (or feels like helping me search) that are > > importing *anything* from oslo_db.tests these must be reverted ASAP. > > cinder, glance, ironic,... see > > http://codesearch.openstack.org/?q=oslo_db.tests&i=nope&files=&repos= > > ;( > > Andreas I don't see any recommendation in the deprecation warning. What should we be using now instead? Sean From andr.kurilin at gmail.com Mon Feb 19 15:39:11 2018 From: andr.kurilin at gmail.com (Andrey Kurilin) Date: Mon, 19 Feb 2018 17:39:11 +0200 Subject: [openstack-dev] [oslo.db] [all] please DO NOT IMPORT from oslo_db.tests.* ! projects doing this need to revert ASAP In-Reply-To: References: Message-ID: Can someone explain me the reason for including "tests" module into packages? 2018-02-19 17:00 GMT+02:00 Michael Bayer : > Hi list - > > Apparently Cinder was misled by my deprecations within the > oslo_db.sqlalchemy.test_base package of DbFixture and DbTestCase, and > in https://review.openstack.org/#/c/522290/ the assumption was made > that these should be imported from oslo_db.tests.sqlalchemy. This > is an immense mistake on my part that I did not expect people to go > looking for the same names elsewhere in private packages and now we > have a serious downstream issue as these modules are not packaged, as > well as the possibility that the oslo_db.tests. package is now locked > in time and I have to add deprecations there also. > > If anyone knows of projects (or feels like helping me search) that are > importing *anything* from oslo_db.tests these must be reverted ASAP. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Best regards, Andrey Kurilin. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mbayer at redhat.com Mon Feb 19 15:40:17 2018 From: mbayer at redhat.com (Michael Bayer) Date: Mon, 19 Feb 2018 10:40:17 -0500 Subject: [openstack-dev] [oslo.db] [all] please DO NOT IMPORT from oslo_db.tests.* ! 
projects doing this need to revert ASAP In-Reply-To: <1519053177-sup-2744@lrrr.local> References: <1519053177-sup-2744@lrrr.local> Message-ID: On Mon, Feb 19, 2018 at 10:15 AM, Doug Hellmann wrote: > Excerpts from Michael Bayer's message of 2018-02-19 10:00:59 -0500: >> Hi list - >> >> Apparently Cinder was misled by my deprecations within the >> oslo_db.sqlalchemy.test_base package of DbFixture and DbTestCase, and >> in https://review.openstack.org/#/c/522290/ the assumption was made >> that these should be imported from oslo_db.tests.sqlalchemy. This >> is an immense mistake on my part that I did not expect people to go >> looking for the same names elsewhere in private packages and now we >> have a serious downstream issue as these modules are not packaged, as >> well as the possibility that the oslo_db.tests. package is now locked >> in time and I have to add deprecations there also. >> >> If anyone knows of projects (or feels like helping me search) that are >> importing *anything* from oslo_db.tests these must be reverted ASAP. >> > > If we have modules or classes we don't expect people to be importing > directly, we need to prefix the names with _ to comply with the naming > conventions we have previously told everyone to look for to recognize > private code. doing that now > > I think it's safe to treat "tests" as an exception (after resolving > this case) but we should probably document that. the example of three projects that did this without any of us knowing should illustrate that we really can't make that assumption. > > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mbayer at redhat.com Mon Feb 19 15:41:08 2018 From: mbayer at redhat.com (Michael Bayer) Date: Mon, 19 Feb 2018 10:41:08 -0500 Subject: [openstack-dev] [oslo.db] [all] please DO NOT IMPORT from oslo_db.tests.* ! projects doing this need to revert ASAP In-Reply-To: References: Message-ID: On Mon, Feb 19, 2018 at 10:39 AM, Andrey Kurilin wrote: > Can someone explain me the reason for including "tests" module into > packages? the "tests" module should not be inside packages. Downstream we have CI running Cinder's test suite against packaged dependencies, which fails because we don't package oslo_db.tests. > > > 2018-02-19 17:00 GMT+02:00 Michael Bayer : >> >> Hi list - >> >> Apparently Cinder was misled by my deprecations within the >> oslo_db.sqlalchemy.test_base package of DbFixture and DbTestCase, and >> in https://review.openstack.org/#/c/522290/ the assumption was made >> that these should be imported from oslo_db.tests.sqlalchemy. This >> is an immense mistake on my part that I did not expect people to go >> looking for the same names elsewhere in private packages and now we >> have a serious downstream issue as these modules are not packaged, as >> well as the possibility that the oslo_db.tests. package is now locked >> in time and I have to add deprecations there also. >> >> If anyone knows of projects (or feels like helping me search) that are >> importing *anything* from oslo_db.tests these must be reverted ASAP. 
>> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > -- > Best regards, > Andrey Kurilin. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From sean.mcginnis at gmx.com Mon Feb 19 15:44:30 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 19 Feb 2018 09:44:30 -0600 Subject: [openstack-dev] [release][ptl] Final Queens RC Deadline Message-ID: <20180219154429.GA2110@sm-xps> Hey everyone, Just a quick reminder that Thursday, 22 March, is the deadline for any final Queens release candidates. After this point we will enter a quiet period for a week in preparation of tagging the final Queens release during the PTG week. If you have any patches merged to stable/queens that are critical to be part of the Queens release, please make sure everything has been backported and propose a new RC before the deadline. And just remember, after the official release is complete, that will open up things for doing normal stable releases of Queens. PTLs, after Thursday, please watch for a patch from the release management team tagging the final release. While not required, your ack on that patch would be appreciated. Thanks! Sean From doug at doughellmann.com Mon Feb 19 15:52:48 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 19 Feb 2018 10:52:48 -0500 Subject: [openstack-dev] [oslo.db] [all] please DO NOT IMPORT from oslo_db.tests.* ! projects doing this need to revert ASAP In-Reply-To: <1519053177-sup-2744@lrrr.local> References: <1519053177-sup-2744@lrrr.local> Message-ID: <1519055532-sup-9818@lrrr.local> Excerpts from Doug Hellmann's message of 2018-02-19 10:15:34 -0500: > Excerpts from Michael Bayer's message of 2018-02-19 10:00:59 -0500: > > Hi list - > > > > Apparently Cinder was misled by my deprecations within the > > oslo_db.sqlalchemy.test_base package of DbFixture and DbTestCase, and > > in https://review.openstack.org/#/c/522290/ the assumption was made > > that these should be imported from oslo_db.tests.sqlalchemy. This > > is an immense mistake on my part that I did not expect people to go > > looking for the same names elsewhere in private packages and now we > > have a serious downstream issue as these modules are not packaged, as > > well as the possibility that the oslo_db.tests. package is now locked > > in time and I have to add deprecations there also. > > > > If anyone knows of projects (or feels like helping me search) that are > > importing *anything* from oslo_db.tests these must be reverted ASAP. > > > > If we have modules or classes we don't expect people to be importing > directly, we need to prefix the names with _ to comply with the naming > conventions we have previously told everyone to look for to recognize > private code. > > I think it's safe to treat "tests" as an exception (after resolving > this case) but we should probably document that. > > Doug Once we resolve the current set of imports, we can land a patch like https://review.openstack.org/545859 to prevent this from happening in the future. 
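To give a sense of the shape of that check (this is only a sketch of the general idea, not the actual content of that review), something along these lines at the top of the tests package would make accidental external imports fail loudly. The environment variable name is made up for illustration:

    # oslo_db/tests/__init__.py -- illustrative sketch only
    import os

    if not os.environ.get('OSLO_DB_TEST_PACKAGE_OK'):
        # Only oslo.db's own test runner would set this flag, so any other
        # project importing the private tests tree gets an immediate error
        # instead of a surprise when the package is not shipped downstream.
        raise ImportError(
            'oslo_db.tests is private to oslo.db and is not included in '
            'distribution packages; use the public helpers under '
            'oslo_db.sqlalchemy instead')

The point is just that anything under oslo_db.tests stays importable only from oslo.db's own test runs.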
Doug From mbayer at redhat.com Mon Feb 19 15:55:52 2018 From: mbayer at redhat.com (Michael Bayer) Date: Mon, 19 Feb 2018 10:55:52 -0500 Subject: [openstack-dev] [oslo.db] [all] please DO NOT IMPORT from oslo_db.tests.* ! projects doing this need to revert ASAP In-Reply-To: <1519055532-sup-9818@lrrr.local> References: <1519053177-sup-2744@lrrr.local> <1519055532-sup-9818@lrrr.local> Message-ID: wow that's heavy-handed. should that be in an oslo utility package of some kind ? On Mon, Feb 19, 2018 at 10:52 AM, Doug Hellmann wrote: > Excerpts from Doug Hellmann's message of 2018-02-19 10:15:34 -0500: >> Excerpts from Michael Bayer's message of 2018-02-19 10:00:59 -0500: >> > Hi list - >> > >> > Apparently Cinder was misled by my deprecations within the >> > oslo_db.sqlalchemy.test_base package of DbFixture and DbTestCase, and >> > in https://review.openstack.org/#/c/522290/ the assumption was made >> > that these should be imported from oslo_db.tests.sqlalchemy. This >> > is an immense mistake on my part that I did not expect people to go >> > looking for the same names elsewhere in private packages and now we >> > have a serious downstream issue as these modules are not packaged, as >> > well as the possibility that the oslo_db.tests. package is now locked >> > in time and I have to add deprecations there also. >> > >> > If anyone knows of projects (or feels like helping me search) that are >> > importing *anything* from oslo_db.tests these must be reverted ASAP. >> > >> >> If we have modules or classes we don't expect people to be importing >> directly, we need to prefix the names with _ to comply with the naming >> conventions we have previously told everyone to look for to recognize >> private code. >> >> I think it's safe to treat "tests" as an exception (after resolving >> this case) but we should probably document that. >> >> Doug > > Once we resolve the current set of imports, we can land a patch like > https://review.openstack.org/545859 to prevent this from happening in > the future. > > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From doug at doughellmann.com Mon Feb 19 15:57:43 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 19 Feb 2018 10:57:43 -0500 Subject: [openstack-dev] [oslo.db] [all] please DO NOT IMPORT from oslo_db.tests.* ! projects doing this need to revert ASAP In-Reply-To: References: Message-ID: <1519055743-sup-1034@lrrr.local> IIRC we started doing that so that consumers building their own packages can run the tests for the packages easily. I don't know how many people are doing that, and apparently at least some downstream consumers aren't packaging everything anyway so they couldn't run those tests. Excerpts from Andrey Kurilin's message of 2018-02-19 17:39:11 +0200: > Can someone explain me the reason for including "tests" module into > packages? > > 2018-02-19 17:00 GMT+02:00 Michael Bayer : > > > Hi list - > > > > Apparently Cinder was misled by my deprecations within the > > oslo_db.sqlalchemy.test_base package of DbFixture and DbTestCase, and > > in https://review.openstack.org/#/c/522290/ the assumption was made > > that these should be imported from oslo_db.tests.sqlalchemy. 
This > > is an immense mistake on my part that I did not expect people to go > > looking for the same names elsewhere in private packages and now we > > have a serious downstream issue as these modules are not packaged, as > > well as the possibility that the oslo_db.tests. package is now locked > > in time and I have to add deprecations there also. > > > > If anyone knows of projects (or feels like helping me search) that are > > importing *anything* from oslo_db.tests these must be reverted ASAP. > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > From andr.kurilin at gmail.com Mon Feb 19 15:57:47 2018 From: andr.kurilin at gmail.com (Andrey Kurilin) Date: Mon, 19 Feb 2018 17:57:47 +0200 Subject: [openstack-dev] [oslo.db] [all] please DO NOT IMPORT from oslo_db.tests.* ! projects doing this need to revert ASAP In-Reply-To: References: Message-ID: As for downstream you can do whatever you want, but it looks like this issue should be solved in upstream. I mean if "tests" directory is located at the top level of the repo, no one will use it. Also, setuptools supports `exclude` option which should solve the issue as well. 2018-02-19 17:41 GMT+02:00 Michael Bayer : > On Mon, Feb 19, 2018 at 10:39 AM, Andrey Kurilin > wrote: > > Can someone explain me the reason for including "tests" module into > > packages? > > the "tests" module should not be inside packages. Downstream we have > CI running Cinder's test suite against packaged dependencies, which > fails because we don't package oslo_db.tests. > > > > > > > > 2018-02-19 17:00 GMT+02:00 Michael Bayer : > >> > >> Hi list - > >> > >> Apparently Cinder was misled by my deprecations within the > >> oslo_db.sqlalchemy.test_base package of DbFixture and DbTestCase, and > >> in https://review.openstack.org/#/c/522290/ the assumption was made > >> that these should be imported from oslo_db.tests.sqlalchemy. This > >> is an immense mistake on my part that I did not expect people to go > >> looking for the same names elsewhere in private packages and now we > >> have a serious downstream issue as these modules are not packaged, as > >> well as the possibility that the oslo_db.tests. package is now locked > >> in time and I have to add deprecations there also. > >> > >> If anyone knows of projects (or feels like helping me search) that are > >> importing *anything* from oslo_db.tests these must be reverted ASAP. > >> > >> ____________________________________________________________ > ______________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > > -- > > Best regards, > > Andrey Kurilin. 
> > > > ____________________________________________________________ > ______________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Best regards, Andrey Kurilin. -------------- next part -------------- An HTML attachment was scrubbed... URL: From andr.kurilin at gmail.com Mon Feb 19 16:22:15 2018 From: andr.kurilin at gmail.com (Andrey Kurilin) Date: Mon, 19 Feb 2018 18:22:15 +0200 Subject: [openstack-dev] [oslo.db] [all] please DO NOT IMPORT from oslo_db.tests.* ! projects doing this need to revert ASAP In-Reply-To: <1519055743-sup-1034@lrrr.local> References: <1519055743-sup-1034@lrrr.local> Message-ID: Imo, it creates more problems than profits. If someone wants to change the code and run tests -> use git repositories. Prepared python package is not about this. 2018-02-19 17:57 GMT+02:00 Doug Hellmann : > IIRC we started doing that so that consumers building their own packages > can run the tests for the packages easily. I don't know how many people > are doing that, and apparently at least some downstream consumers aren't > packaging everything anyway so they couldn't run those tests. > > Excerpts from Andrey Kurilin's message of 2018-02-19 17:39:11 +0200: > > Can someone explain me the reason for including "tests" module into > > packages? > > > > 2018-02-19 17:00 GMT+02:00 Michael Bayer : > > > > > Hi list - > > > > > > Apparently Cinder was misled by my deprecations within the > > > oslo_db.sqlalchemy.test_base package of DbFixture and DbTestCase, and > > > in https://review.openstack.org/#/c/522290/ the assumption was made > > > that these should be imported from oslo_db.tests.sqlalchemy. This > > > is an immense mistake on my part that I did not expect people to go > > > looking for the same names elsewhere in private packages and now we > > > have a serious downstream issue as these modules are not packaged, as > > > well as the possibility that the oslo_db.tests. package is now locked > > > in time and I have to add deprecations there also. > > > > > > If anyone knows of projects (or feels like helping me search) that are > > > importing *anything* from oslo_db.tests these must be reverted ASAP. > > > > > > ____________________________________________________________ > ______________ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Best regards, Andrey Kurilin. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mbayer at redhat.com Mon Feb 19 16:23:25 2018 From: mbayer at redhat.com (Michael Bayer) Date: Mon, 19 Feb 2018 11:23:25 -0500 Subject: [openstack-dev] [oslo.db] [all] please DO NOT IMPORT from oslo_db.tests.* ! projects doing this need to revert ASAP In-Reply-To: References: Message-ID: On Mon, Feb 19, 2018 at 10:57 AM, Andrey Kurilin wrote: > As for downstream you can do whatever you want, but it looks like this issue > should be solved in upstream. I mean if "tests" directory is located at the > top level of the repo, no one will use it. again, the search at http://codesearch.openstack.org/?q=oslo_db.tests&i=nope&files=&repos= shows four downstream projects using it. I am now submitting gerrits for all four and also getting internal downstream patches to fix internally. this is as bad as it gets. > Also, setuptools supports `exclude` option which should solve the issue as > well. > > > > 2018-02-19 17:41 GMT+02:00 Michael Bayer : >> >> On Mon, Feb 19, 2018 at 10:39 AM, Andrey Kurilin >> wrote: >> > Can someone explain me the reason for including "tests" module into >> > packages? >> >> the "tests" module should not be inside packages. Downstream we have >> CI running Cinder's test suite against packaged dependencies, which >> fails because we don't package oslo_db.tests. >> >> >> > >> > >> > 2018-02-19 17:00 GMT+02:00 Michael Bayer : >> >> >> >> Hi list - >> >> >> >> Apparently Cinder was misled by my deprecations within the >> >> oslo_db.sqlalchemy.test_base package of DbFixture and DbTestCase, and >> >> in https://review.openstack.org/#/c/522290/ the assumption was made >> >> that these should be imported from oslo_db.tests.sqlalchemy. This >> >> is an immense mistake on my part that I did not expect people to go >> >> looking for the same names elsewhere in private packages and now we >> >> have a serious downstream issue as these modules are not packaged, as >> >> well as the possibility that the oslo_db.tests. package is now locked >> >> in time and I have to add deprecations there also. >> >> >> >> If anyone knows of projects (or feels like helping me search) that are >> >> importing *anything* from oslo_db.tests these must be reverted ASAP. >> >> >> >> >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> >> Unsubscribe: >> >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> > >> > >> > >> > -- >> > Best regards, >> > Andrey Kurilin. >> > >> > >> > __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: >> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > -- > Best regards, > Andrey Kurilin. 
> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From doug at doughellmann.com Mon Feb 19 16:32:43 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 19 Feb 2018 11:32:43 -0500 Subject: [openstack-dev] [oslo.db] [all] please DO NOT IMPORT from oslo_db.tests.* ! projects doing this need to revert ASAP In-Reply-To: References: <1519053177-sup-2744@lrrr.local> <1519055532-sup-9818@lrrr.local> Message-ID: <1519057901-sup-5929@lrrr.local> Excerpts from Michael Bayer's message of 2018-02-19 10:55:52 -0500: > wow that's heavy-handed. should that be in an oslo utility package of > some kind ? I thought about that, but figured we should wait and see whether we actually want to take the approach before polishing it. If we do we can add a function in oslo.utils. > > On Mon, Feb 19, 2018 at 10:52 AM, Doug Hellmann wrote: > > Excerpts from Doug Hellmann's message of 2018-02-19 10:15:34 -0500: > >> Excerpts from Michael Bayer's message of 2018-02-19 10:00:59 -0500: > >> > Hi list - > >> > > >> > Apparently Cinder was misled by my deprecations within the > >> > oslo_db.sqlalchemy.test_base package of DbFixture and DbTestCase, and > >> > in https://review.openstack.org/#/c/522290/ the assumption was made > >> > that these should be imported from oslo_db.tests.sqlalchemy. This > >> > is an immense mistake on my part that I did not expect people to go > >> > looking for the same names elsewhere in private packages and now we > >> > have a serious downstream issue as these modules are not packaged, as > >> > well as the possibility that the oslo_db.tests. package is now locked > >> > in time and I have to add deprecations there also. > >> > > >> > If anyone knows of projects (or feels like helping me search) that are > >> > importing *anything* from oslo_db.tests these must be reverted ASAP. > >> > > >> > >> If we have modules or classes we don't expect people to be importing > >> directly, we need to prefix the names with _ to comply with the naming > >> conventions we have previously told everyone to look for to recognize > >> private code. > >> > >> I think it's safe to treat "tests" as an exception (after resolving > >> this case) but we should probably document that. > >> > >> Doug > > > > Once we resolve the current set of imports, we can land a patch like > > https://review.openstack.org/545859 to prevent this from happening in > > the future. > > > > Doug > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From emilien at redhat.com Mon Feb 19 16:37:01 2018 From: emilien at redhat.com (Emilien Macchi) Date: Mon, 19 Feb 2018 08:37:01 -0800 Subject: [openstack-dev] [tripleo] Draft schedule for PTG Message-ID: Alex and I have been working on the agenda for next week, based on what people proposed in topics. 
The draft calendar is visible here: https://calendar.google.com/calendar/embed?src=tgpb5tv12mlu7kge5oqertje78%40group.calendar.google.com&ctz=Europe%2FDublin Also you can import the ICS from: https://calendar.google.com/calendar/ical/tgpb5tv12mlu7kge5oqertje78%40group.calendar.google.com/public/basic.ics Note this is a draft - we would love your feedback about the proposal. Some sessions might be too short or too long? You to tell us. (Please look at event details for descriptions). Also, for each session we need a "driver", please tell us if you volunteer to do it. Please let us know here and we'll make adjustment, we have plenty of room for it. Thanks! -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmsimard at redhat.com Mon Feb 19 16:50:21 2018 From: dmsimard at redhat.com (David Moreau Simard) Date: Mon, 19 Feb 2018 11:50:21 -0500 Subject: [openstack-dev] [release][ptl] Final Queens RC Deadline In-Reply-To: <20180219154429.GA2110@sm-xps> References: <20180219154429.GA2110@sm-xps> Message-ID: On Mon, Feb 19, 2018 at 10:44 AM, Sean McGinnis wrote: > Hey everyone, > > Just a quick reminder that Thursday, 22 March, is the deadline for any final > Queens release candidates. After this point we will enter a quiet period for a > week in preparation of tagging the final Queens release during the PTG week. February, right ? David Moreau Simard Senior Software Engineer | OpenStack RDO dmsimard = [irc, github, twitter] From dmsimard at redhat.com Mon Feb 19 16:50:47 2018 From: dmsimard at redhat.com (David Moreau Simard) Date: Mon, 19 Feb 2018 11:50:47 -0500 Subject: [openstack-dev] [TripleO] Deep dive on Ansible Integration Thursday Feb 22 1400UTC In-Reply-To: References: Message-ID: Thanks for recording it! David Moreau Simard Senior Software Engineer | OpenStack RDO dmsimard = [irc, github, twitter] On Mon, Feb 19, 2018 at 9:41 AM, James Slagle wrote: > As mentioned in the TripleO meeting last week, I volunteered to give a > deep dive on the state of TripleO and Ansible integration with > config-download. > > I'll do that this week on Thursday February 22nd at 1400UTC. > > Anyone can join via bluejeans: https://bluejeans.com/7754237859/ > Etherpad: https://bluejeans.com/7754237859/ > Optional pre-reading: > https://docs.openstack.org/tripleo-docs/latest/install/advanced_deployment/ansible_config_download.html > > The session will be recorded and later uploaded to Youtube at: > https://www.youtube.com/channel/UCNGDxZGwUELpgaBoLvABsTA > > -- > -- James Slagle > -- > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From sean.mcginnis at gmx.com Mon Feb 19 16:53:07 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 19 Feb 2018 10:53:07 -0600 Subject: [openstack-dev] [release][ptl] Final Queens RC Deadline In-Reply-To: References: <20180219154429.GA2110@sm-xps> Message-ID: <20180219165306.GA5891@sm-xps> On Mon, Feb 19, 2018 at 11:50:21AM -0500, David Moreau Simard wrote: > On Mon, Feb 19, 2018 at 10:44 AM, Sean McGinnis wrote: > > Hey everyone, > > > > Just a quick reminder that Thursday, 22 March, is the deadline for any final > > Queens release candidates. After this point we will enter a quiet period for a > > week in preparation of tagging the final Queens release during the PTG week. 
> > February, right ? > > David Moreau Simard > Senior Software Engineer | OpenStack RDO > > dmsimard = [irc, github, twitter] Whoops!!! Yes, sorry. I am getting way ahead of myself I guess. The final Queens RC deadline is Thursday, 22 FEBRUARY. Sorry about any confusion. Thanks David for correcting that. From tiswanso at cisco.com Mon Feb 19 16:57:44 2018 From: tiswanso at cisco.com (Timothy Swanson (tiswanso)) Date: Mon, 19 Feb 2018 16:57:44 +0000 Subject: [openstack-dev] [magnum] Example bringup of Istio on Magnum k8s + Octavia Message-ID: <517ED6E5-27B5-4D29-9D80-3B8A5E96463B@cisco.com> In case anyone is interested in the details, I went through the exercise of a basic bringup of Istio on Magnum k8s (with stable/pike): https://tiswanso.github.io/istio/istio_on_magnum.html I hope to update with follow-on items that may also be explored, such as: - Istio automatic side-car injection via adding the k8s admission controller during cluster create - Add Raw VM app to istio service-mesh Big thanks to Spyros (strigazi) for helping me through some magnum bringup snags. —Tim Swanson (tiswanso) From mbayer at redhat.com Mon Feb 19 17:05:33 2018 From: mbayer at redhat.com (Michael Bayer) Date: Mon, 19 Feb 2018 12:05:33 -0500 Subject: [openstack-dev] [oslo.db] [all] please DO NOT IMPORT from oslo_db.tests.* ! projects doing this need to revert ASAP In-Reply-To: <1519057901-sup-5929@lrrr.local> References: <1519053177-sup-2744@lrrr.local> <1519055532-sup-9818@lrrr.local> <1519057901-sup-5929@lrrr.local> Message-ID: Summarizing all the reviews: Doug's proposed check in oslo_db.tests: https://review.openstack.org/#/c/545859/ Mark oslo_db internal fixtures private: https://review.openstack.org/545862 Cinder: https://review.openstack.org/545860 Neutron: https://review.openstack.org/545868 Ironic: https://review.openstack.org/545874 Glance: https://review.openstack.org/545878 Glare: https://review.openstack.org/545883 On Mon, Feb 19, 2018 at 11:32 AM, Doug Hellmann wrote: > Excerpts from Michael Bayer's message of 2018-02-19 10:55:52 -0500: >> wow that's heavy-handed. should that be in an oslo utility package of >> some kind ? > > I thought about that, but figured we should wait and see whether we > actually want to take the approach before polishing it. If we do we can > add a function in oslo.utils. > >> >> On Mon, Feb 19, 2018 at 10:52 AM, Doug Hellmann wrote: >> > Excerpts from Doug Hellmann's message of 2018-02-19 10:15:34 -0500: >> >> Excerpts from Michael Bayer's message of 2018-02-19 10:00:59 -0500: >> >> > Hi list - >> >> > >> >> > Apparently Cinder was misled by my deprecations within the >> >> > oslo_db.sqlalchemy.test_base package of DbFixture and DbTestCase, and >> >> > in https://review.openstack.org/#/c/522290/ the assumption was made >> >> > that these should be imported from oslo_db.tests.sqlalchemy. This >> >> > is an immense mistake on my part that I did not expect people to go >> >> > looking for the same names elsewhere in private packages and now we >> >> > have a serious downstream issue as these modules are not packaged, as >> >> > well as the possibility that the oslo_db.tests. package is now locked >> >> > in time and I have to add deprecations there also. >> >> > >> >> > If anyone knows of projects (or feels like helping me search) that are >> >> > importing *anything* from oslo_db.tests these must be reverted ASAP. 
>> >> > >> >> >> >> If we have modules or classes we don't expect people to be importing >> >> directly, we need to prefix the names with _ to comply with the naming >> >> conventions we have previously told everyone to look for to recognize >> >> private code. >> >> >> >> I think it's safe to treat "tests" as an exception (after resolving >> >> this case) but we should probably document that. >> >> >> >> Doug >> > >> > Once we resolve the current set of imports, we can land a patch like >> > https://review.openstack.org/545859 to prevent this from happening in >> > the future. >> > >> > Doug >> > >> > __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From amoralej at redhat.com Mon Feb 19 17:10:56 2018 From: amoralej at redhat.com (Alfredo Moralejo Alonso) Date: Mon, 19 Feb 2018 18:10:56 +0100 Subject: [openstack-dev] [yaql] [tripleo] Backward incompatible change in YAQL minor version In-Reply-To: <20180218003536.GY23143@thor.bakeyournoodle.com> References: <20180218003536.GY23143@thor.bakeyournoodle.com> Message-ID: On Sun, Feb 18, 2018 at 1:35 AM, Tony Breeds wrote: > On Sat, Feb 17, 2018 at 04:40:12PM -0500, Dan Prince wrote: > > Thanks for the update Emilien. A couple of things to add: > > > > 1) This was really difficult to pin-point via the Heat stack error > > message ('list index out of range'). I actually had to go and add > > LOG.debug statements to Heat to get to the bottom of it. I aim to sync > > with a few of the Heat folks next week on this to see if we can do > > better here. > > > > 2) I had initially thought it would have been much better to revert > > the (breaking) change to python-yaql. That said it was from 2016! So I > > think our window of opportunity for the revert is probably way too > > large to consider that. Sounds like we need to publish the yaql > > package more often in RDO, etc. So your patch to update our queries is > > probably our only option. > > I'm keen to sit down at the PTG for a quick discussion on how the > requirements team can better support RDO (and therefore tripleo) to > test these OpenStack deliverables sooner. > > As you point out the commit was from Aug 2016, which was released in Mar > 2017. It was added to upper-constraints almost immediately and > global-requirements (as the minimum supported version) in Aug 2017. > > However, it was only recently added to RDO. > > So if there is anything we can do on the requirements team to signal > these changes that isn't being done already lets work out what it is :) > > Recently, we have added a job in post pipeline for openstack/requirements in https://review.rdoproject.org to automatically post updates in RDO dependencies repo when changes are detected in upper-constraints. This job will try to automatically update the dependencies when possible or notify to take required manual actions in some cases. I expect this will improve dependencies management in RDO in next releases. > Yours Tony. 
> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrhillsman at gmail.com Mon Feb 19 17:31:39 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Mon, 19 Feb 2018 11:31:39 -0600 Subject: [openstack-dev] User Committee Elections Message-ID: Hi everyone, We had to push the voting back a week if you have been keeping up with the UC elections[0]. That being said, election officials have sent out the poll and so voting is now open! Be sure to check out the candidates - https://goo.gl/x183he - and get your vote in before the poll closes. [0] https://governance.openstack.org/uc/reference/uc-election-feb2018.html -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmitrii.shcherbakov at canonical.com Mon Feb 19 18:04:33 2018 From: dmitrii.shcherbakov at canonical.com (Dmitrii Shcherbakov) Date: Mon, 19 Feb 2018 21:04:33 +0300 Subject: [openstack-dev] [charms] In-Reply-To: References: <04ba9034-9319-4b96-1dde-89b888454b76@canonical.com> Message-ID: <403aff84-dbfb-5344-c77a-33f6d1ebd634@canonical.com> > Data migration from where to where? We access the current state by retrieving the data from leader db, or am I missing something here? In case there are changes in how data is stored in one version of a charm vs the other. Another problem is application versioning: we do have version-specific templates but this data may be versioned too. Old entries may not be simple strings, e.g. they can be small objects which can change following version changes (new data added or removed in a pre-defined way). I can also see potential scenarios where you would need to gracefully retire old data as features get deprecated and, eventually, removed. So, during an upgrade you would have two copies of stateful data and a charm would react differently depending on the current application version set. After an upgrade the old copy could be automatically discarded. > Perhaps I'm being naive but I don't see these developing into data sets large enough to cause performance problems. I don't think it's going to be used for large data sets either but you never know. > Each time the action is run the context associated with the action is deleted and recreated. If an action argument is unset I guess we could interpret that as leave-unchanged. Leave unchanged - yes. Still need to be able to delete completely though. What I like about actions is that you can clearly express imperative steps with arguments that you have to perform after a deployment and they have a very specific type of data in mind which is fetched from stateful applications out of band by an operator. On 19.02.2018 18:11, Liam Young wrote: > On Mon, Feb 19, 2018 at 9:05 AM, Dmitrii Shcherbakov > wrote: >> Hi Liam, >> >>> I was recently looking at how to support custom configuration that relies >>> on post deployment setup. >> I would describe the problem in general as follows: >> >> 1) charms can get context not only from Juju (config options, relation data, >> leader data), environment (operating system release, OpenStack release, >> services running etc.) but also from a stateful data store (e.g. 
a Keystone >> database); >> 2) it's not easy to track application state from a charm because: >> authentication is needed to fetch persistent state, notifications from a >> data store cannot be reliably set up because charm code is ran periodically >> and it is not always present in memory (polling is neither timely nor >> efficient). Another problem is that software that holds the state needs to >> support data change notifications which raises version compatibility >> questions. >> >> By using actions we move the responsibility for data retrieval and change >> notifications to an operator but a more generic scenario would be modeling a >> feedback loop from an application to Juju as a modeling system where changes >> can be either automatic or gated by an operator (an orchestrator). Making it >> automatic would mean that a service would get notifications/poll data from a >> state store and would be authorized to use Juju client to make certain >> changes. > This is an interesting idea, but there is no such mechanism within > Juju that I know of. > >> Another problem to solve is maintenance of that state: if we start >> maintaining a key-value DB in leader settings we need to think about data >> migration over time and how to access the current state. > Data migration from where to where? We access the current state by retrieving > the data from leader db, or am I missing something here? > >> In other words, in >> CRUD, the "C" part is relatively straightforward, "R" is more complicated >> with large data sets (if I have a lot of leader data, how do I interpret it >> efficiently?), > Perhaps I'm being naive but I don't see these developing into data > sets large enough > to cause performance problems. > >> "UD" is less clear - seems like there will have to be 3 or 4 >> actions per feature for C, [R], U and D or one action that can multiplex >> commands. > Each time the action is run the context associated with the action is deleted > and recreated. If an action argument is unset I guess we could interpret that as > leave-unchanged. > >> This brings me to the question of how is it different from state-specific >> config values with a complex structure. > To my mind the difference is complexity for the end user. An action has clearly > defined arguments and the charm action code looks after forming this into > the correct context. > >> Instead of leader data, a per-charm >> config option could hold state data in some format namespaced by a feature >> name or config file name to render. A data model would be needed to make >> sure we can create versioned application-specific state buckets (e.g. for >> upgrades, hold both states, then remove the old one). >> >> Application version-specific config values is something not modeled in Juju >> although custom application versions are present >> (https://jujucharms.com/docs/2.3/reference-hook-tools#application-version-set). >> Version information has to be set via a hook tool which means that it has to >> come from a custom config option anyway. Each charm has its own method to >> specify an application version and config dependencies are not modeled >> explicitly - one has to implement that logic in a charm without any Juju API >> for charms present the way I see it. >> >> config('key', 'app-version') - would be something to aim for. >> >> Do you have any thoughts about leader data vs a special complex config >> option per charm and versioning? >> >> Thanks! 
> Thanks for the feedback Dmitrii From Arkady.Kanevsky at dell.com Mon Feb 19 18:38:13 2018 From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com) Date: Mon, 19 Feb 2018 18:38:13 +0000 Subject: [openstack-dev] [Openstack-operators] User Committee Elections In-Reply-To: References: Message-ID: I saw election email with the pointer to votes. See no reason for stopping it now. But extending vote for 1 more week makes sense. Thanks, Arkady From: Melvin Hillsman [mailto:mrhillsman at gmail.com] Sent: Monday, February 19, 2018 11:32 AM To: user-committee ; OpenStack Mailing List ; OpenStack Operators ; OpenStack Dev ; community at lists.openstack.org Subject: [Openstack-operators] User Committee Elections Hi everyone, We had to push the voting back a week if you have been keeping up with the UC elections[0]. That being said, election officials have sent out the poll and so voting is now open! Be sure to check out the candidates - https://goo.gl/x183he - and get your vote in before the poll closes. [0] https://governance.openstack.org/uc/reference/uc-election-feb2018.html -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... URL: From janzian at us.ibm.com Mon Feb 19 19:33:42 2018 From: janzian at us.ibm.com (James Anziano) Date: Mon, 19 Feb 2018 19:33:42 +0000 Subject: [openstack-dev] [neutron] Bug Deputy Report Message-ID: An HTML attachment was scrubbed... URL: From mike at openstack.org Mon Feb 19 20:15:25 2018 From: mike at openstack.org (Mike Perez) Date: Mon, 19 Feb 2018 12:15:25 -0800 Subject: [openstack-dev] [all] Thanks Bot - For a happier open source community, give recognition Message-ID: <20180219201525.GA22198@openstack.org> Every open source community is made up of real people with real feelings. Many open source contributors are working in their free time to provide essential software that we use daily. Sometimes praise is lost in the feedback of bugs or missing features. Focusing on too much negative feedback can lead contributors to frustration and burnout. However you end up contributing to OpenStack, or any open source project, I believe that what gets people excited about working with a community is some form of recognition. My first answer to people coming into the OpenStack community is to join our Project Team Gathering event. Significant changes are discussed here to understand the technical details to carry out the work in the new release. You should seek out people who are owners of these changes and volunteer to work on a portion of the work. Not only are these people interested in your success by having you take on some of the work they have invested in, but you will be doing work that interests the entire team. You’ll finish the improvements and be known as the person in the project with the expertise in that area. You’ll receive some recognition from the team and the community using your software. And just like that, you’re hooked because you know your work is making a difference. Maybe you’ll improve that area of the project more, venture onto other parts of the project, or even expand to other open source projects. If you work in the OpenStack community, there’s also another way you can give and get recognition. In OpenStack IRC channels, you can thank members of the community publicly with the following command: #thanks for being a swell person in that heated discussion! To be clear, is replaced with the person you want to give thanks. 
Where does this information go? Just like the Success Bot in which we can share successes as a community, Thanks Bot will post them to the OpenStack wiki. They will also be featured in the OpenStack Developer Digest. https://wiki.openstack.org/wiki/Thanks In developing this feature, I’ve had help and feedback from various members of the community. You can see my history of thanking people along the way, too. At the next OpenStack event, you’re still welcome to buy a tasty beverage for someone to say thanks. But why not give them recognition now too and let them know how much they’re appreciated in the community? -- Mike Perez (thingee) -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From miguel at mlavalle.com Mon Feb 19 20:52:58 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Mon, 19 Feb 2018 14:52:58 -0600 Subject: [openstack-dev] [neutron] Bug Deputy Report - resend Message-ID: There is one critical bug from this week: https://bugs.launchpad.net/neutron/+bug/1749667 - neutron doesn't correctly handle unknown protocols and should whitelist known and handled protocols There is a fix in progress for this already, thanks Brian for picking this up. There are three bugs that still need further attention: https://bugs.launchpad.net/neutron/+bug/1748658 - Restarting Neutron containers which make use of network namespaces doesn't work This has a fix for the tripleo side but I believe it still needs attention from neutron https://bugs.launchpad.net/neutron/+bug/1749425 - Neutron integrated with OpenVSwitch drops packets and fails to plug/unplug interfaces from OVS on router interfaces at scale There is some discussion in the comments of this bug in regard to easing reproduction, should help confirmation when as that effort continues. https://bugs.launchpad.net/neutron/+bug/1749982 - After l3-agent-router-add inactive router gets all traffic I didn't get much time with this one but couldn't manage to reproduce it. However I'm not too familiar with this area, so I'm hoping someone else can give a more definitive answer. There is also one bug marked as incomplete: https://bugs.launchpad.net/neutron/+bug/1748894 - intermittent dhcp failures This could use some eyes as well to get confirmed. Thanks everyone, - James Anziano -------------- next part -------------- An HTML attachment was scrubbed... URL: From no-reply at openstack.org Mon Feb 19 21:23:18 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Mon, 19 Feb 2018 21:23:18 -0000 Subject: [openstack-dev] [neutron] neutron-dynamic-routing 12.0.0.0rc2 (queens) Message-ID: Hello everyone, A new release candidate for neutron-dynamic-routing for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/neutron-dynamic-routing/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! 
Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/neutron-dynamic-routing/log/?h=stable/queens Release notes for neutron-dynamic-routing can be found at: http://docs.openstack.org/releasenotes/neutron-dynamic-routing/ From no-reply at openstack.org Mon Feb 19 21:25:44 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Mon, 19 Feb 2018 21:25:44 -0000 Subject: [openstack-dev] [neutron] neutron 12.0.0.0rc2 (queens) Message-ID: Hello everyone, A new release candidate for neutron for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/neutron/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/neutron/log/?h=stable/queens Release notes for neutron can be found at: http://docs.openstack.org/releasenotes/neutron/ From no-reply at openstack.org Mon Feb 19 21:27:08 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Mon, 19 Feb 2018 21:27:08 -0000 Subject: [openstack-dev] [neutron] networking-ovn 4.0.0.0rc2 (queens) Message-ID: Hello everyone, A new release candidate for networking-ovn for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/networking-ovn/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/networking-ovn/log/?h=stable/queens Release notes for networking-ovn can be found at: http://docs.openstack.org/releasenotes/networking-ovn/ If you find an issue that could be considered release-critical, please file it at: https://bugs.launchpad.net/networking-ovn and tag it *queens-rc-potential* to bring it to the networking-ovn release crew's attention. From no-reply at openstack.org Mon Feb 19 21:32:08 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Mon, 19 Feb 2018 21:32:08 -0000 Subject: [openstack-dev] [neutron] networking-midonet 6.0.0.0rc2 (queens) Message-ID: Hello everyone, A new release candidate for networking-midonet for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/networking-midonet/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/networking-midonet/log/?h=stable/queens Release notes for networking-midonet can be found at: http://docs.openstack.org/releasenotes/networking-midonet/ From melwittt at gmail.com Mon Feb 19 21:35:38 2018 From: melwittt at gmail.com (melanie witt) Date: Mon, 19 Feb 2018 13:35:38 -0800 Subject: [openstack-dev] [nova] review priorities etherpad for rocky Message-ID: Howdy Stackers, I’ve set up the review priorities etherpad [1] for us to use during the Rocky cycle. 
It’s the same format we had back in Pike with the exception of a couple of new sections I’ve added: "Non-priority approved blueprints” and "Co-authors wanted”. The idea of the etherpad is to organize a subset of proposed patches to focus review and make it easy to find “what can I review today?” The etherpad is organized by subteam/topic, so reviewers can easily find the latest and greatest patches for each area. As a subteam/topic member, please add links to your patches accordingly. The "Non-priority approved blueprints” is a new section I’d like to try out to create some visibility for non-priority work that is ready for review. I’m thinking if we have a section for it, it will be easier to keep reviews for those blueprints in our rotation. If the section gets too large, we can create a new etherpad for that and link to it in the section. The other new section is called "Co-authors wanted”. Here I’d like to provide a place where authors can link patches they’d like to collaborate on with one or more other co-authors. The common scenario I think about is: an author has researched a bug and is able to propose a patch, but doesn’t have the time or the knowhow to provide test coverage in the patch. They could add the patch to the “Co-authors wanted” section and if another author is interested, they could join the patch, add the test coverage, and add themselves as co-author. By doing this, I hope to make it easier for co-authors to work on patches together. Let me know if you have any questions about the etherpad. Thanks, -melanie [1] https://etherpad.openstack.org/p/rocky-nova-priorities-tracking From zbitter at redhat.com Mon Feb 19 22:25:05 2018 From: zbitter at redhat.com (Zane Bitter) Date: Mon, 19 Feb 2018 17:25:05 -0500 Subject: [openstack-dev] [yaql] [tripleo] Backward incompatible change in YAQL minor version In-Reply-To: References: Message-ID: <78bc049f-ca50-c2f4-6d1c-83d106716cdb@redhat.com> On 17/02/18 16:40, Dan Prince wrote: > Thanks for the update Emilien. A couple of things to add: > > 1) This was really difficult to pin-point via the Heat stack error > message ('list index out of range'). I actually had to go and add > LOG.debug statements to Heat to get to the bottom of it. I aim to sync > with a few of the Heat folks next week on this to see if we can do > better here. The message itself is pretty much all we get from yaql, even in its own interpreter: (py27) cat yaql_data.json {"data": [{"foo": "bar"}]} (py27) yaql -d yaql_data.json Yet Another Query Language - command-line query tool Version 1.1.3 Copyright (c) 2013-2017 Mirantis, Inc yaql> dict($.data.where($ != null).flatten().selectMany($.items()).groupBy($[0], $[1], $.flatten())) { "foo": [ "bar" ] } yaql> dict($.data.where($ != null).flatten().selectMany($.items()).groupBy($[0], $[1], [$[0], $[1].flatten()])) Execution exception: list index out of range yaql> (Note that different lengths of data will give you different errors though.) The big issue here though is that for failures in validation we report the path in the template to the function that failed, but we don't do the same for failures in actually resolving the function at runtime. A comprehensive fix is challenging without breaking what is supposed to be a stable third-party plugin API, but it might be possible. Was that the information you needed to debug this? We do report which resource failed, but for something with a huge definition like allNodesConfig I can see why that might not help as much as you'd hope. 
> 2) I had initially thought it would have been much better to revert > the (breaking) change to python-yaql. That said it was from 2016! So I > think our window of opportunity for the revert is probably way too > large to consider that. Sounds like we need to publish the yaql > package more often in RDO, etc. So your patch to update our queries is > probably our only option. I _think_ this should be OK for upgrades, as long as you never do a stack update using the existing (Pike) templates after upgrading the undercloud to Queens, but... sadface. I think we need to either merge Thomas's patch that gets rid of this function altogether (https://review.openstack.org/#/c/545856/) and backport it to older versions of t-h-t, or make yaql itself backward-compatible by doing something like https://review.openstack.org/#/c/545996/ cheers, Zane. > On Fri, Feb 16, 2018 at 8:36 PM, Emilien Macchi wrote: >> Upgrading YAQL from 1.1.0 to 1.1.3 breaks advanced queries with groupBy >> aggregation. >> >> The commit that broke it is >> https://github.com/openstack/yaql/commit/3fb91784018de335440b01b3b069fe45dc53e025 >> >> It broke TripleO: https://bugs.launchpad.net/tripleo/+bug/1750032 >> But Alex and I figured (after a strong headache) that we needed to update >> the query like this: https://review.openstack.org/545498 >> >> It would be great to avoid this kind of change within minor versions, please >> please. >> >> Happy weekend, >> >> PS: I'm adding YAQL to my linkedin profile right now. > > Be careful here. Do you really want to write YAQL queries all day! > > Dan > >> -- >> Emilien Macchi >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From tony at bakeyournoodle.com Mon Feb 19 23:24:20 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Tue, 20 Feb 2018 10:24:20 +1100 Subject: [openstack-dev] [yaql] [tripleo] Backward incompatible change in YAQL minor version In-Reply-To: References: <20180218003536.GY23143@thor.bakeyournoodle.com> Message-ID: <20180219232420.GB23143@thor.bakeyournoodle.com> On Mon, Feb 19, 2018 at 06:10:56PM +0100, Alfredo Moralejo Alonso wrote: > Recently, we have added a job in post pipeline for openstack/requirements > in https://review.rdoproject.org to > automatically post updates in RDO dependencies repo when changes are > detected in upper-constraints. This > job will try to automatically update the dependencies when possible or > notify to take required manual actions > in some cases. > > I expect this will improve dependencies management in RDO in next releases. That's cool. Can you point me at how that's done? I'm not sure how you'd automate the builds but that's probably just lack of imagination on my part ;P Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From tony at bakeyournoodle.com Mon Feb 19 23:42:27 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Tue, 20 Feb 2018 10:42:27 +1100 Subject: [openstack-dev] [yaql] [tripleo] Backward incompatible change in YAQL minor version In-Reply-To: References: <87vaetcbap.fsf@work.i-did-not-set--mail-host-address--so-tickle-me> Message-ID: <20180219234227.GC23143@thor.bakeyournoodle.com> On Mon, Feb 19, 2018 at 03:24:24PM +0100, Bogdan Dobrelya wrote: > With a backport of the YAQL fixes for tht made for Pike, would it be the > full fix to make a backport of yaql 1.1.3 for Pike repos as well? Or am I > missing something? At some level that should be fine. In the broader OpenSteck perspective we've been gating with yaql 1.1.3 from pypi for releases > newton. So while updating the yaql package on pike to 1.1.3 isn't guaranteed to be safe it should be fine. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From tony at bakeyournoodle.com Mon Feb 19 23:57:55 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Tue, 20 Feb 2018 10:57:55 +1100 Subject: [openstack-dev] [oslo.db] [all] please DO NOT IMPORT from oslo_db.tests.* ! projects doing this need to revert ASAP In-Reply-To: References: Message-ID: <20180219235754.GD23143@thor.bakeyournoodle.com> On Mon, Feb 19, 2018 at 10:00:59AM -0500, Michael Bayer wrote: > Hi list - > > Apparently Cinder was misled by my deprecations within the > oslo_db.sqlalchemy.test_base package of DbFixture and DbTestCase, and > in https://review.openstack.org/#/c/522290/ the assumption was made > that these should be imported from oslo_db.tests.sqlalchemy. This > is an immense mistake on my part that I did not expect people to go > looking for the same names elsewhere in private packages and now we > have a serious downstream issue as these modules are not packaged, as > well as the possibility that the oslo_db.tests. package is now locked > in time and I have to add deprecations there also. > > If anyone knows of projects (or feels like helping me search) that are > importing *anything* from oslo_db.tests these must be reverted ASAP. I get: [tony at thor openstack]$ grep -Erin '((from|import) oslo_db.tests|from oslo_db import tests)' */*/* openstack/cinder/cinder/tests/unit/db/test_migrations.py:29:from oslo_db.tests.sqlalchemy import base as test_base openstack/glance/glance/tests/functional/db/test_migrations.py:23:from oslo_db.tests.sqlalchemy import base as test_base openstack/glare/glare/tests/unit/db/migrations/test_migrations.py:35:from oslo_db.tests.sqlalchemy import base as test_base openstack/ironic/build/lib/ironic/tests/unit/db/sqlalchemy/test_migrations.py:47:from oslo_db.tests.sqlalchemy import base as test_base openstack/ironic/ironic/tests/unit/db/sqlalchemy/test_migrations.py:47:from oslo_db.tests.sqlalchemy import base as test_base openstack/neutron/neutron/tests/unit/db/test_sqlalchemytypes.py:19:from oslo_db.tests.sqlalchemy import base as test_base + bunch of oslo_db hits but I guess they're un interesting ;P I last updated my local clones yesterday so it shouldn't be too far from the current state. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From joshua.hesketh at gmail.com Tue Feb 20 00:47:38 2018 From: joshua.hesketh at gmail.com (Joshua Hesketh) Date: Tue, 20 Feb 2018 11:47:38 +1100 Subject: [openstack-dev] [infra][all] New Zuul Depends-On syntax In-Reply-To: <20180219150341.676l7dxwskwu3uej@yuggoth.org> References: <87efmfz05l.fsf@meyer.lemoncheese.net> <005844f0-ea22-3e0b-fd2e-c1c889d9293b@inaugust.com> <2b04ea61-2b2c-d740-37e7-47215b10e3a0@nemebean.com> <87zi51v5uu.fsf@meyer.lemoncheese.net> <7bea8147-4d21-bbb3-7a28-a179a4a132af@redhat.com> <871si4czfe.fsf@meyer.lemoncheese.net> <20180219150341.676l7dxwskwu3uej@yuggoth.org> Message-ID: Perhaps we need to consider a backport of the syntax to the 2.5 series? It could help with the transition for those who need to upgrade. However, on the other hand it might make deployers more complacent to do so. On Tue, Feb 20, 2018 at 2:03 AM, Jeremy Stanley wrote: > On 2018-02-18 19:25:07 -0800 (-0800), Emilien Macchi wrote: > [...] > > My recommendation for TripleO devs: use the old syntax if you want your > > code to be tested by RDO Third party CI > [...] > > This is hopefully only a temporary measure? I think I've heard it > mentioned that planning is underway to switch that CI system to Zuul > v3 (perhaps after 3.0.0 officially releases soon). > -- > Jeremy Stanley > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Tue Feb 20 01:35:45 2018 From: emilien at redhat.com (Emilien Macchi) Date: Mon, 19 Feb 2018 17:35:45 -0800 Subject: [openstack-dev] [infra][all] New Zuul Depends-On syntax In-Reply-To: <20180219150341.676l7dxwskwu3uej@yuggoth.org> References: <87efmfz05l.fsf@meyer.lemoncheese.net> <005844f0-ea22-3e0b-fd2e-c1c889d9293b@inaugust.com> <2b04ea61-2b2c-d740-37e7-47215b10e3a0@nemebean.com> <87zi51v5uu.fsf@meyer.lemoncheese.net> <7bea8147-4d21-bbb3-7a28-a179a4a132af@redhat.com> <871si4czfe.fsf@meyer.lemoncheese.net> <20180219150341.676l7dxwskwu3uej@yuggoth.org> Message-ID: On Mon, Feb 19, 2018 at 7:03 AM, Jeremy Stanley wrote: [...] > This is hopefully only a temporary measure? I think I've heard it > mentioned that planning is underway to switch that CI system to Zuul > v3 (perhaps after 3.0.0 officially releases soon). > Adding Tristan and Fabien in copy, they know better about the roadmap. -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Tue Feb 20 01:37:25 2018 From: emilien at redhat.com (Emilien Macchi) Date: Mon, 19 Feb 2018 17:37:25 -0800 Subject: [openstack-dev] [release][ptl] Final Queens RC Deadline In-Reply-To: <20180219165306.GA5891@sm-xps> References: <20180219154429.GA2110@sm-xps> <20180219165306.GA5891@sm-xps> Message-ID: On Mon, Feb 19, 2018 at 8:53 AM, Sean McGinnis wrote: [...] > > The final Queens RC deadline is Thursday, 22 FEBRUARY. > Too late, you said March. Thanks a lot for the extra-month :-) /jk -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gkotton at vmware.com Tue Feb 20 07:07:34 2018 From: gkotton at vmware.com (Gary Kotton) Date: Tue, 20 Feb 2018 07:07:34 +0000 Subject: [openstack-dev] [neutron][l2gw] stable/queens tripleo issues Message-ID: Hi, At the moment the stable/queens branch is broken due to the tripleo CI test failing [i]. Does anyone have any hints here on what we should look at? I am not sure if this is with ansible/centos… Thanks Gary [i] http://logs.openstack.org/88/543188/2/check/tripleo-ci-centos-7-scenario004-multinode-oooq-container/9bc3b75/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Tue Feb 20 07:13:50 2018 From: emilien at redhat.com (Emilien Macchi) Date: Mon, 19 Feb 2018 23:13:50 -0800 Subject: [openstack-dev] [neutron][l2gw] stable/queens tripleo issues In-Reply-To: References: Message-ID: Hey Gary :-) Yeah our CI isn't ready yet for stable/queens, sorry for that. I propose to disable TripleO jobs in stable/queens for l2gw project: https://review.openstack.org/546059 Hopefully that helps! Cheers, On Mon, Feb 19, 2018 at 11:07 PM, Gary Kotton wrote: > Hi, > > At the moment the stable/queens branch is broken due to the tripleo CI > test failing [i]. Does anyone have any hints here on what we should look > at? I am not sure if this is with ansible/centos… > > Thanks > > Gary > > [i] http://logs.openstack.org/88/543188/2/check/tripleo-ci- > centos-7-scenario004-multinode-oooq-container/9bc3b75/ > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From gkotton at vmware.com Tue Feb 20 07:50:34 2018 From: gkotton at vmware.com (Gary Kotton) Date: Tue, 20 Feb 2018 07:50:34 +0000 Subject: [openstack-dev] [neutron][l2gw] stable/queens tripleo issues In-Reply-To: References: Message-ID: <60EB0772-1A01-4DD1-9E52-32D9E985D1C6@vmware.com> Thanks!! When its ready we can add it again. Thank you! From: Emilien Macchi Reply-To: OpenStack List Date: Tuesday, February 20, 2018 at 9:14 AM To: OpenStack List Subject: Re: [openstack-dev] [neutron][l2gw] stable/queens tripleo issues Hey Gary :-) Yeah our CI isn't ready yet for stable/queens, sorry for that. I propose to disable TripleO jobs in stable/queens for l2gw project: https://review.openstack.org/546059 Hopefully that helps! Cheers, On Mon, Feb 19, 2018 at 11:07 PM, Gary Kotton > wrote: Hi, At the moment the stable/queens branch is broken due to the tripleo CI test failing [i]. Does anyone have any hints here on what we should look at? I am not sure if this is with ansible/centos… Thanks Gary [i] http://logs.openstack.org/88/543188/2/check/tripleo-ci-centos-7-scenario004-multinode-oooq-container/9bc3b75/ __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sam47priya at gmail.com Tue Feb 20 08:33:10 2018 From: sam47priya at gmail.com (Sam P) Date: Tue, 20 Feb 2018 17:33:10 +0900 Subject: [openstack-dev] [masakari] [masakari-monitors] : Intrusive Instance Monitoring through QEMU Guest Agent Design Update In-Reply-To: <47EFB32CD8770A4D9590812EE28C977E9624F0FD@ALA-MBD.corp.ad.wrs.com> References: <47EFB32CD8770A4D9590812EE28C977E9624F0FD@ALA-MBD.corp.ad.wrs.com> Message-ID: ​Hi Louie, Thank you for patch and Sorry for the delay​ response. I prefer ​option 2. >From Masakari point of view, this is an instance event. Because, even if some thing wrong inside the VM, Masakari only can try to fix it by restart, rebuilt, migrate... etc the VM. Which are the same recovery work flow for instance failures. Therefore, I prefer option 2 rather than option1. Currently, we are discussing how to implement recovery method customization feature [0] in Masakari. With this feature, you may able to call external workflows for certain failure events. For this feature, different failure models required distinguishable events and option 3 will not be appropriate. [0] https://review.openstack.org/#/c/458023/ ​> 1. define a new type of event for Intrusive Instance monitoring or > 2. add a new event within the INSTANCE_EVENTS as we may eventually integrate with instance monitoring or >3.simply reuse the LIFECYCLE/STOPPED_FAILED event ( which is what we are implementing for now.) --- Regards, Sampath On Fri, Feb 16, 2018 at 12:05 AM, Kwan, Louie wrote: > We submitted the first implementation patch for the following blueprint > > > > https://blueprints.launchpad.net/openstack/?searchtext= > intrusive-instance-monitoring > > > > i.e. https://review.openstack.org/#/c/534958/ > > > > The second patch will be pushed within a week time or so. > > > > One item we would like to seek clarification among the community is about > how we should integrate the notification within the masakari engine. > > > > One option is to reuse what has been defined at masakari/engine/instance_ > events.py. > > > > e.g. > > def masakari_notifier(self, domain_uuid): > > if self.getJournalObject(domain_uuid).getSentNotification(): > > LOG.debug('notifier.send_notification Skipped:' + domain_uuid) > > else: > > hostname = socket.gethostname() > > noticeType = ec.EventConstants.TYPE_VM > > current_time = timeutils.utcnow() > > event = { > > 'notification': { > > 'type': noticeType, > > 'hostname': hostname, > > 'generated_time': current_time, > > 'payload': { > > 'event': 'LIFECYCLE', > > 'instance_uuid': domain_uuid, > > 'vir_domain_event': 'STOPPED_FAILED' > > } > > } > > } > > LOG.debug(str(event)) > > self.notifier.send_notification(CONF.callback.retry_max, > > CONF.callback.retry_interval, > > event) > > self.getJournalObject(domain_uuid).setSentNotification(True) > > > > > > ​​ > Should we > > > > 1. define a new type of event for Intrusive Instance monitoring or > > 2. add a new event within the INSTANCE_EVENTS as we may eventually > integrate with instance monitoring or > > 3. simply reuse the LIFECYCLE/STOPPED_FAILED event ( which is what > we are implementing for now.) > > > > One of our reference test case is to detect application meltdown within VM > which QEMU may not aware the failure. The recovery should pretty much be > the same as LIFECYCLE/STOPPED_FAILED event. What do you think? > > > > Thanks. 
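To make option 2 concrete, a rough sketch only -- the new event type and vir_domain_event names below are hypothetical placeholders, not agreed names, extending the INSTANCE_EVENTS structure quoted in the note below:

    # masakari/engine/instance_events.py (illustrative sketch only)
    INSTANCE_EVENTS = {
        # Existing events.
        'LIFECYCLE': ['STOPPED_FAILED'],
        'IO_ERROR': ['IO_ERROR_REPORT'],
        # Hypothetical new event type reported by intrusive instance
        # monitoring; the recovery workflow would stay the same as for
        # other instance failures.
        'GUEST_AGENT': ['HEARTBEAT_FAILED'],
    }

The monitor-side payload would then carry 'event': 'GUEST_AGENT' (or whatever name is agreed) instead of reusing 'LIFECYCLE'/'STOPPED_FAILED'.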
> > Louie > > > > Ntoe: > > > > Here is what we got from masakari/engine/instance_events.py > > > > These are the events which needs to be processed by masakari in case of > > instance recovery failure. > > """ > > > > INSTANCE_EVENTS = { > > # Add more events and vir_domain_events here. > > 'LIFECYCLE': ['STOPPED_FAILED'], > > 'IO_ERROR': ['IO_ERROR_REPORT'] > > } > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robertc at robertcollins.net Tue Feb 20 09:58:59 2018 From: robertc at robertcollins.net (Robert Collins) Date: Tue, 20 Feb 2018 22:58:59 +1300 Subject: [openstack-dev] [oslo.db] [all] please DO NOT IMPORT from oslo_db.tests.* ! projects doing this need to revert ASAP In-Reply-To: References: Message-ID: On 20 February 2018 at 04:39, Andrey Kurilin wrote: > Can someone explain me the reason for including "tests" module into > packages? Namespacing the tests makes the test ids unique which is very helpful for aggregating test data as we do. Including that in the tar.gz that is uploaded to PyPI is pretty standard - a) its how you can verify that what you downloaded works in your context (and no, going to git is not a good answer there because that means you now need the entire ecosystem of tools to build man pages etc etc etc). and b) its providing the full source of the thing we're releasing. Thats you know, how F/LOSS works. It should be possible, if we care to, to exclude the tests from wheels made from those distributions, which would make the footprint for binary usage smaller - I have no strong opinion on whether thats wise or not, but I will say I can't see a use case for needing the tests in that scenario. Similarly whether those tests should be included in Linux distribution packages or not is a debate I don't care to enter - but again I don't see a use case: binary distributions are presumed to be integration tested by the distributor, not the consumers. I don't think importing code from another packages 'tests' module is wrong or right - python is very much a consenting-adults language - but I do think that in OpenStack with the sheer number of people involved we should set very clear guidance; and I'd suggest that saying its not supported is a good default: if folk want to offer a contract where something can be imported they can always put it in a different package. In summary: - moving 'tests' to the root is a poor idea, please don't do it - we had this debate back in 2011 or so and nothing has changed that I can see. - we can, and perhaps should, exclude $package.tests from wheels to save bandwidth (but *not* from .tar.gz). - linux distributions should IMO follow what we put in wheels. -Rob From sathlang at redhat.com Tue Feb 20 10:26:16 2018 From: sathlang at redhat.com (Sofer Athlan-Guyot) Date: Tue, 20 Feb 2018 11:26:16 +0100 Subject: [openstack-dev] [yaql] [tripleo] Backward incompatible change in YAQL minor version In-Reply-To: <78bc049f-ca50-c2f4-6d1c-83d106716cdb@redhat.com> References: <78bc049f-ca50-c2f4-6d1c-83d106716cdb@redhat.com> Message-ID: <87r2pgc5yv.fsf@work.i-did-not-set--mail-host-address--so-tickle-me> Hi, Zane Bitter writes: > On 17/02/18 16:40, Dan Prince wrote: >> Thanks for the update Emilien. 
A couple of things to add: >> >> 1) This was really difficult to pin-point via the Heat stack error >> message ('list index out of range'). I actually had to go and add >> LOG.debug statements to Heat to get to the bottom of it. I aim to sync >> with a few of the Heat folks next week on this to see if we can do >> better here. > > The message itself is pretty much all we get from yaql, even in its own > interpreter: > > (py27) cat yaql_data.json > {"data": [{"foo": "bar"}]} > (py27) yaql -d yaql_data.json > Yet Another Query Language - command-line query tool > Version 1.1.3 > Copyright (c) 2013-2017 Mirantis, Inc > > yaql> dict($.data.where($ != > null).flatten().selectMany($.items()).groupBy($[0], $[1], $.flatten())) > { > "foo": [ > "bar" > ] > } > yaql> dict($.data.where($ != > null).flatten().selectMany($.items()).groupBy($[0], $[1], [$[0], > $[1].flatten()])) > Execution exception: list index out of range > yaql> > > (Note that different lengths of data will give you different errors though.) > > The big issue here though is that for failures in validation we report > the path in the template to the function that failed, but we don't do > the same for failures in actually resolving the function at runtime. A > comprehensive fix is challenging without breaking what is supposed to be > a stable third-party plugin API, but it might be possible. Was that the > information you needed to debug this? > > We do report which resource failed, but for something with a huge > definition like allNodesConfig I can see why that might not help as much > as you'd hope. > >> 2) I had initially thought it would have been much better to revert >> the (breaking) change to python-yaql. That said it was from 2016! So I >> think our window of opportunity for the revert is probably way too >> large to consider that. Sounds like we need to publish the yaql >> package more often in RDO, etc. So your patch to update our queries is >> probably our only option. > > I _think_ this should be OK for upgrades, as long as you never do a > stack update using the existing (Pike) templates after upgrading the > undercloud to Queens, but... sadface. So as can be seen there[1] during a P->M upgrade this is not safe for upgrade :) Beside the fact that it breaks all hope of having any kind of mixed upgrade CI testing (where the undercloud is N and expected to deploy overcloud N-1) it breaks mixed version operation as well. [1] http://logs.openstack.org/62/545762/3/experimental/tripleo-ci-centos-7-scenario001-multinode-oc-upgrade/afc98a5/logs/undercloud/home/zuul/overcloud_deploy.log.txt.gz#_2018-02-19_09_48_54 > I think we need to either merge Thomas's patch that gets rid of this > function altogether (https://review.openstack.org/#/c/545856/) I'm currently testing the pike's backport of this[2] in the experimental pipeline. I'll report inside the review. [2] https://review.openstack.org/#/c/546094/ >>> On Mon, Feb 19, 2018 at 03:24:24PM +0100, Bogdan Dobrelya wrote: >>> >>> With a backport of the YAQL fixes for tht made for Pike, would it be the >>> full fix to make a backport of yaql 1.1.3 for Pike repos as well? Or am I >>> missing something? >> >> At some level that should be fine. In the broader OpenSteck perspective >> we've been gating with yaql 1.1.3 from pypi for releases > newton. So >> while updating the yaql package on pike to 1.1.3 isn't guaranteed to be >> safe it should be fine. 
So we fast forward upgrade, we need to make sure that master undercloud is able to deploy the newton templates for CI testing and support of some mixed version operations. So if Thomas' patch enables us to support both yaql version that would be ideal from an upgrade perspective. It will need backport all the way to newton. > backport it to older versions of t-h-t, or make yaql itself > backward-compatible by doing something like > https://review.openstack.org/#/c/545996/ > > cheers, > Zane. > >> On Fri, Feb 16, 2018 at 8:36 PM, Emilien Macchi wrote: >>> Upgrading YAQL from 1.1.0 to 1.1.3 breaks advanced queries with groupBy >>> aggregation. >>> >>> The commit that broke it is >>> https://github.com/openstack/yaql/commit/3fb91784018de335440b01b3b069fe45dc53e025 >>> >>> It broke TripleO: https://bugs.launchpad.net/tripleo/+bug/1750032 >>> But Alex and I figured (after a strong headache) that we needed to update >>> the query like this: https://review.openstack.org/545498 >>> >>> It would be great to avoid this kind of change within minor versions, please >>> please. >>> >>> Happy weekend, >>> >>> PS: I'm adding YAQL to my linkedin profile right now. >> >> Be careful here. Do you really want to write YAQL queries all day! >> >> Dan >> >>> -- >>> Emilien Macchi >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Sofer From doug at doughellmann.com Tue Feb 20 14:46:01 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 20 Feb 2018 09:46:01 -0500 Subject: [openstack-dev] [oslo.db] [all] please DO NOT IMPORT from oslo_db.tests.* ! projects doing this need to revert ASAP In-Reply-To: References: Message-ID: <1519137871-sup-2379@lrrr.local> Excerpts from Robert Collins's message of 2018-02-20 22:58:59 +1300: > On 20 February 2018 at 04:39, Andrey Kurilin wrote: > > Can someone explain me the reason for including "tests" module into > > packages? > > Namespacing the tests makes the test ids unique which is very helpful > for aggregating test data as we do. Including that in the tar.gz that > is uploaded to PyPI is pretty standard - a) its how you can verify > that what you downloaded works in your context (and no, going to git > is not a good answer there because that means you now need the entire > ecosystem of tools to build man pages etc etc etc). and b) its > providing the full source of the thing we're releasing. Thats you > know, how F/LOSS works. Thanks, Robert, that's the argument I was trying to remember earlier in the thread. 
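For anyone skimming the thread, a short sketch of what this looks like in practice -- the first import is the pattern being asked to revert (taken verbatim from the grep hits earlier in the thread), the second is the deprecated-but-public location mentioned at the top of the thread, not necessarily the long-term replacement:

    # Don't: oslo_db.tests.* is a private, unpackaged test suite with no
    # stability guarantees.
    from oslo_db.tests.sqlalchemy import base as test_base

    # The deprecated-but-public home of DbFixture/DbTestCase is under
    # oslo_db.sqlalchemy instead:
    from oslo_db.sqlalchemy import test_base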
> It should be possible, if we care to, to exclude the tests from wheels > made from those distributions, which would make the footprint for > binary usage smaller - I have no strong opinion on whether thats wise > or not, but I will say I can't see a use case for needing the tests in > that scenario. > > Similarly whether those tests should be included in Linux distribution > packages or not is a debate I don't care to enter - but again I don't > see a use case: binary distributions are presumed to be integration > tested by the distributor, not the consumers. > > I don't think importing code from another packages 'tests' module is > wrong or right - python is very much a consenting-adults language - > but I do think that in OpenStack with the sheer number of people > involved we should set very clear guidance; and I'd suggest that > saying its not supported is a good default: if folk want to offer a > contract where something can be imported they can always put it in a > different package. Exactly. The Oslo team struggles sometimes to support project teams using the libraries in unexpected ways. This is one of those unexpected uses, and I think we don't want to support it because we have not designed the test suite with backwards compatibility in mind. We just need to be clear about that. > > In summary: > - moving 'tests' to the root is a poor idea, please don't do it - we > had this debate back in 2011 or so and nothing has changed that I can > see. > - we can, and perhaps should, exclude $package.tests from wheels to > save bandwidth (but *not* from .tar.gz). > - linux distributions should IMO follow what we put in wheels. > > -Rob > From dtroyer at gmail.com Tue Feb 20 14:52:19 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Tue, 20 Feb 2018 08:52:19 -0600 Subject: [openstack-dev] [docs] About the convention to use '.' instead of 'source'. In-Reply-To: <1518986610-sup-9087@lrrr.local> References: <20180217210312.mv43be7re73vac2i@yuggoth.org> <373c2c5c-6d39-59f2-96f5-5fe9dbbb6364@inaugust.com> <20180218160151.4m6yzuvd7pdq7c2c@yuggoth.org> <1518986610-sup-9087@lrrr.local> Message-ID: >> On 2018-02-18 03:55:51 -0600 (-0600), Monty Taylor wrote: >> > That said - I completely agree with fungi on the description of >> > the tradeoffs of each direction, and I do think it's valuable to >> > pick one for the docs. FWIW, DevStack declared long ago that it was built to use bash, even though in some cases the shebang may even be /bin/sh. While other similar shells have been accommodated in the past they are considered unsupported because we do not expect reviewers to know the subtlties of similar shells. Those who use zsh and friends are on the hook to maintain the compatibility and their patches are (mostly?) accepted. dt -- Dean Troyer dtroyer at gmail.com From bodenvmw at gmail.com Tue Feb 20 15:19:09 2018 From: bodenvmw at gmail.com (Boden Russell) Date: Tue, 20 Feb 2018 08:19:09 -0700 Subject: [openstack-dev] [neutron][networking-vsphere] neutron-lib patches for review Message-ID: <1ff0a91f-14bd-10d2-5436-efd146f3a8a8@gmail.com> Could I please ask the folks from networking-vsphere to keep an eye on their review queue for incoming neutron-lib related patches? Today we have at least 3 in the networking-vsphere queue that haven't gotten a core review for over a month. In order to keep the neutron-lib effort moving and stable, it's important for networking projects using stable branches to assist with these reviews. 
In general the code changes are minimal so I hoping this isn't asking for a lot of time. Thanks much From kendall at openstack.org Tue Feb 20 16:44:20 2018 From: kendall at openstack.org (Kendall Waters) Date: Tue, 20 Feb 2018 10:44:20 -0600 Subject: [openstack-dev] Community Voting NOW OPEN - OpenStack Summit Vancouver 2018 Message-ID: Hi everyone, Session voting is now open for the May 2018 OpenStack Summit in Vancouver! VOTE HERE Hurry, voting closes Sunday, February 25 at 11:59pm Pacific Time (Monday, February 26 at 7:59 UTC). The Programming Committees will ultimately determine the final schedule. Community votes are meant to help inform the decision, but are not considered to be the deciding factor. The Programming Committee members exercise judgment in their area of expertise and help ensure diversity. View full details of the session selection process here . Continue to visit https://www.openstack.org/summit/vancouver-2018 for all Summit-related information. REGISTER Register HERE before prices increase in early April! VISA APPLICATION PROCESS More information about the visa process can be found HERE . TRAVEL SUPPORT PROGRAM March 22 is the last day to submit applications. Please submit your applications HERE by 11:59pm Pacific Time (March 23 at 6:59am UTC). If you have any questions, please email summit at openstack.org . Cheers, Kendall Kendall Waters OpenStack Marketing kendall at openstack.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Tue Feb 20 18:13:00 2018 From: emilien at redhat.com (Emilien Macchi) Date: Tue, 20 Feb 2018 10:13:00 -0800 Subject: [openstack-dev] [tripleo] The Weekly Owl - 10th Edition Message-ID: Note: this is the tenth edition of a weekly update of what happens in TripleO. The goal is to provide a short reading (less than 5 minutes) to learn where we are and what we're doing. Any contributions and feedback are welcome. Link to the previous version: http://lists.openstack.org/pipermail/openstack-dev/2018-February/127331.html +---------------------------------+ | General announcements | +---------------------------------+ +--> Focus is still on releasing Queens RC1 and branching stable/queens before the end of February if possible. +--> PTG draft scheduled on https://etherpad.openstack.org/p/tripleo-ptg-rocky +--> PTG is next week, your fellow reporter will be too busy to write a weekly owl, next edition on March 6th with fresh post-ptg news! +------------------------------+ | Continuous Integration | +------------------------------+ +--> Rover is Ronelle and ruck is Arx. Please let them know any new CI issue. +--> Master promotion (and Queens) is 22 days, Pike is 0 days and Ocata is 0 days. +--> More: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting and https://goo.gl/D4WuBP +-------------+ | Upgrades | +-------------+ +--> Work in progress: FFU, Queens update/upgrade workflows +--> More: https://etherpad.openstack.org/p/tripleo-upgrade-squad-status and https://etherpad.openstack.org/p/tripleo-upgrade-squad-meeting +---------------+ | Containers | +---------------+ +--> Containerized undercloud is the major ongoing effort in the squad. +--> More: https://etherpad.openstack.org/p/tripleo-containers-squad-status +--------------+ | Integration | +--------------+ +--> No major updates this week. +--> More: https://etherpad.openstack.org/p/tripleo-integration-squad-status +---------+ | UI/CLI | +---------+ +--> Team is still planning work in Rocky and preparing RC1. 
+--> Still working Automated UI testing in CI +--> More: https://etherpad.openstack.org/p/tripleo-ui-cli-squad-status +---------------+ | Validations | +---------------+ +--> No major updates this week. +--> More: https://etherpad.openstack.org/p/tripleo-validations-squad-status +---------------+ | Networking | +---------------+ +--> Containerized Neutron has a regression where DHCP server and L3 routers fail if the respective agent container is stopped. Fix is WIP. +--> More: https://etherpad.openstack.org/p/tripleo-networking-squad-status +--------------+ | Workflows | +--------------+ +--> Node deletion is broken, team is working on https://bugs.launchpad.net/tripleo/+bug/1749426 +--> Team is planning PTG: https://etherpad.openstack.org/p/tripleo-workflows-squad-ptg +--> More: https://etherpad.openstack.org/p/tripleo-workflows-squad-status +------------+ | Owl fact | +------------+ Owls can rotate their necks 270 degrees. A blood-pooling system collects blood to power their brains and eyes when neck movement cuts off circulation. Source: http://www.audubon.org/news/11-fun-facts-about-owls Stay tuned! -- Your fellow reporter, Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From cdent+os at anticdent.org Tue Feb 20 18:36:24 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 20 Feb 2018 18:36:24 +0000 (GMT) Subject: [openstack-dev] [tc] [all] TC Report 18-08 Message-ID: HTML: https://anticdent.org/tc-report-18-08.html Most TC activity has either been in preparation for the [PTG](https://www.openstack.org/ptg) or stalling to avoid starting something that won't be finished before the PTG. But a few discussions to point at. # When's the Next PTG? Last [Tuesday evening](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-02-13.log.html#t2018-02-13T20:10:37) had a brief discussion asking when (and where) the next PTG will be, after Dublin. The answer? We don't know yet. It will likely come up in Dublin. # Base Services and Eventlet A [question about base services](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-02-15.log.html#t2018-02-15T15:08:21) led to discussion about ways to technically and socially avoid the use of eventlet. Notable was the observation that we continue to have new projects that adopt patterns established in Nova that while perfectly workable are no longer considered ideal. There's some work to do to make sure we provide a bit more guidance. # Naming for S Release Rocky is starting, so it is time to be thinking about [naming for S](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-02-15.log.html#t2018-02-15T15:34:37). Berlin is the geographic location that will be the source for names beginning with "S". # Python 3.6 Most of [Friday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-02-16.log.html) was devoted to Python 3.6. Many distros are headed that way and OpenStack CI is currently 2.7 and 3.5. # TC Topics at the PTG A reminder that there is an [etherpad for TC topics](https://etherpad.openstack.org/p/PTG-Dublin-TC-topics) at the PTG. Because of the PTG there won't be a TC Report next week, but I will endeavor to write up standalone reports of the discussions started by that etherpad. Those discussion will hopefully grant a bit more vigor, drive, and context to the TC Report, which has wandered a bit of late. 
-- Chris Dent (⊙_⊙') https://anticdent.org/ freenode: cdent tw: @anticdent From doug at doughellmann.com Tue Feb 20 18:52:33 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 20 Feb 2018 13:52:33 -0500 Subject: [openstack-dev] [ptl][all][release] published release notes may be out of date Message-ID: <1519152697-sup-9218@lrrr.local> Thomas Bechtold reported some issues with the oslo.config release notes not showing some information. As part of investigating the problem, I discovered that reno's own release notes were not up to date, either. I think the problem is a combination of a (now fixed) reno bug [1] and some unrelated issues with the doc publishing jobs. The issue is most likely to affect libraries, since those have been frozen and may not have landed patches to retrigger the publishing jobs, but server projects without a lot of activity over the last few weeks may also have out of date release notes. Please take a look at your projects' release notes and documentation. If they were last updated before 1 Feb 2018, land a trivial (or real) patch to the project to cause the documentation jobs to run. If you have a current global requirements sync patch or translation patch those are good candidates. The change in [2] is an example of a trivial patch that might be good if you need one. Doug [1] https://bugs.launchpad.net/reno/+bug/1746076 [2] https://review.openstack.org/545846 From isaku.yamahata at gmail.com Fri Feb 16 19:14:43 2018 From: isaku.yamahata at gmail.com (Isaku Yamahata) Date: Fri, 16 Feb 2018 11:14:43 -0800 Subject: [openstack-dev] [neutron] Generalized issues in the unit testing of ML2 mechanism drivers In-Reply-To: References: Message-ID: <20180216191443.GA13722@private.email.ne.jp> On Tue, Feb 13, 2018 at 05:48:32PM -0500, Assaf Muller wrote: > On Wed, Dec 13, 2017 at 7:30 AM, Michel Peterson wrote: > > Through my work in networking-odl I've found what I believe is an issue > > present in a majority of ML2 drivers. An issue I think needs awareness so > > each project can decide a course of action. > > > > The issue stems from the adopted practice of importing > > `neutron.tests.unit.plugins.ml2.test_plugin` and creating classes with noop > > operation to "inherit" tests for free [1]. The idea behind is nice, you > > inherit >600 tests that cover several scenarios. > > > > There are several issues of adopting this pattern, two of which are > > paramount: > > > > 1. If the mechanism driver is not loaded correctly [2], the tests then don't > > test the mechanism driver but still succeed and therefore there is no > > indication that there is something wrong with the code. In the case of > > networking-odl it wasn't discovered until last week, which means that for >1 > > year it this was adding PASSed tests uselessly. > > > > 2. It gives a false sense of reassurance. If the code of those tests is > > analyzed it's possible to see that the code itself is mostly centered around > > testing the REST endpoint of neutron than actually testing that the > > mechanism succeeds on the operation it was supposed to test. As a result of > > this, there is marginally added value on having those tests. To be clear, > > the hooks for the respective operations are called on the mechanism driver, > > but the result of the operation is not asserted. > > > > I would love to hear more voices around this, so feel free to comment. > > > > Regarding networking-odl the solution I propose is the following: > > **First**, discard completely the change mentioned in the footnote #2. 
> > **Second**, create a patch that completely removes the tests that follow > > this pattern. > > An interesting exercise would be to add 'raise ValueError' type > exceptions in various ODL ML2 mech driver flows and seeing which tests > fail. Basically, if a test passes without the ODL mech driver loaded, > or with a faulty ODL mech driver, then you don't need to run the test > for networking-odl changes. I'd be hesitant to remove all tests > though, it's a good investment of time to figure out which tests are > valuable to you. Mike and Michel should raise it at the PTG for discussion. I know Mike will attend. thanks, -- Isaku Yamahata From Louie.Kwan at windriver.com Tue Feb 20 21:11:53 2018 From: Louie.Kwan at windriver.com (Kwan, Louie) Date: Tue, 20 Feb 2018 21:11:53 +0000 Subject: [openstack-dev] [masakari] [masakari-monitors] : Intrusive Instance Monitoring through QEMU Guest Agent Design Update In-Reply-To: References: <47EFB32CD8770A4D9590812EE28C977E9624F0FD@ALA-MBD.corp.ad.wrs.com> Message-ID: <47EFB32CD8770A4D9590812EE28C977E96251A58@ALA-MBD.corp.ad.wrs.com> Hi Sam Make sense and will do option 2. FYI, we are planning to upload the second iim patch within a week and may do the masakari engine in different patch shortly. Thanks for your reply. Louie From: Sam P [mailto:sam47priya at gmail.com] Sent: Tuesday, February 20, 2018 3:33 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [masakari] [masakari-monitors] : Intrusive Instance Monitoring through QEMU Guest Agent Design Update ​Hi Louie, Thank you for patch and Sorry for the delay​ response. I prefer ​option 2. From Masakari point of view, this is an instance event. Because, even if some thing wrong inside the VM, Masakari only can try to fix it by restart, rebuilt, migrate... etc the VM. Which are the same recovery work flow for instance failures. Therefore, I prefer option 2 rather than option1. Currently, we are discussing how to implement recovery method customization feature [0] in Masakari. With this feature, you may able to call external workflows for certain failure events. For this feature, different failure models required distinguishable events and option 3 will not be appropriate. [0] https://review.openstack.org/#/c/458023/ ​> 1. define a new type of event for Intrusive Instance monitoring or > 2. add a new event within the INSTANCE_EVENTS as we may eventually integrate with instance monitoring or >3.simply reuse the LIFECYCLE/STOPPED_FAILED event ( which is what we are implementing for now.) --- Regards, Sampath On Fri, Feb 16, 2018 at 12:05 AM, Kwan, Louie > wrote: We submitted the first implementation patch for the following blueprint https://blueprints.launchpad.net/openstack/?searchtext=intrusive-instance-monitoring i.e. https://review.openstack.org/#/c/534958/ The second patch will be pushed within a week time or so. One item we would like to seek clarification among the community is about how we should integrate the notification within the masakari engine. One option is to reuse what has been defined at masakari/engine/instance_events.py. e.g. 
def masakari_notifier(self, domain_uuid): if self.getJournalObject(domain_uuid).getSentNotification(): LOG.debug('notifier.send_notification Skipped:' + domain_uuid) else: hostname = socket.gethostname() noticeType = ec.EventConstants.TYPE_VM current_time = timeutils.utcnow() event = { 'notification': { 'type': noticeType, 'hostname': hostname, 'generated_time': current_time, 'payload': { 'event': 'LIFECYCLE', 'instance_uuid': domain_uuid, 'vir_domain_event': 'STOPPED_FAILED' } } } LOG.debug(str(event)) self.notifier.send_notification(CONF.callback.retry_max, CONF.callback.retry_interval, event) self.getJournalObject(domain_uuid).setSentNotification(True) ​​ Should we 1. define a new type of event for Intrusive Instance monitoring or 2. add a new event within the INSTANCE_EVENTS as we may eventually integrate with instance monitoring or 3. simply reuse the LIFECYCLE/STOPPED_FAILED event ( which is what we are implementing for now.) One of our reference test case is to detect application meltdown within VM which QEMU may not aware the failure. The recovery should pretty much be the same as LIFECYCLE/STOPPED_FAILED event. What do you think? Thanks. Louie Ntoe: Here is what we got from masakari/engine/instance_events.py These are the events which needs to be processed by masakari in case of instance recovery failure. """ INSTANCE_EVENTS = { # Add more events and vir_domain_events here. 'LIFECYCLE': ['STOPPED_FAILED'], 'IO_ERROR': ['IO_ERROR_REPORT'] } __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrea.frittoli at gmail.com Tue Feb 20 21:22:54 2018 From: andrea.frittoli at gmail.com (Andrea Frittoli) Date: Tue, 20 Feb 2018 21:22:54 +0000 Subject: [openstack-dev] [QA][all] Migration of Tempest / Grenade jobs to Zuul v3 native In-Reply-To: References: Message-ID: Dear all, updates: - host/group vars: zuul now supports declaring host and group vars in the job definition [0][1] - thanks corvus and infra team! This is a great help towards writing the devstack and tempest base multinode jobs [2][3] * NOTE: zuul merges dict variables through job inheritance. Variables in host/group_vars override global ones. I will write some examples further clarify this. - stable/pike: devstack ansible changes have been backported to stable/pike, so we can now run zuulv3 jobs against stable/pike too - thank you tosky! next change in progress related to pike is to provide tempest-full-pike for branchless repositories [4] - documentation: devstack now publishes documentation on its ansible roles [5]. More devstack documentation patches are in progress to provide jobs reference, examples and a job migration how-to [6]. 
Andrea Frittoli (andreaf) [0] https://docs.openstack.org/infra/zuul/user/config.html#attr-job.host_vars [1] https://docs.openstack.org/infra/zuul/user/config.html#attr-job.group_vars [2] https://review.openstack.org/#/c/545696/ [3] https://review.openstack.org/#/c/545724/ [4] https://review.openstack.org/#/c/546196/ [5] https://docs.openstack.org/devstack/latest/roles.html [6] https://review.openstack.org/#/c/545992/ On Mon, Feb 19, 2018 at 2:46 PM Andrea Frittoli wrote: > Dear all, > > updates: > - tempest-full-queens and tempest-full-py3-queens are now available for > testing of branchless repositories [0]. They are used for tempest and > devstack-gate. If you own a tempest plugin in a branchless repo, you may > consider adding similar jobs to your plugin if you use it for tests on > stable/queen as well. > - if you have migrated jobs based on devstack-tempest please let me know, > I'm building reference docs and I'd like to include as many examples as > possible > - work on multi-node is in progress, but not ready still - you can follow > the patches in the multinode branch [1] > - updates on some of the points from my previous email are inline below > > Andrea Frittoli (andreaf) > > [0] http://git.openstack.org/cgit/openstack/tempest/tree/.zuul.yaml#n73 > [1] > https://review.openstack.org/#/q/status:open++branch:master+topic:multinode > > > > On Thu, Feb 15, 2018 at 11:31 PM Andrea Frittoli < > andrea.frittoli at gmail.com> wrote: > >> Dear all, >> >> this is the first or a series of ~regular updates on the migration of >> Tempest / Grenade jobs to Zuul v3 native. >> >> The QA team together with the infra team are working on providing the >> OpenStack community with a set of base Tempest / Grenade jobs that can be >> used as a basis to write new CI jobs / migrate existing legacy ones with a >> minimal effort and very little or no Ansible knowledge as a precondition. >> >> The effort is tracked in an etherpad [0]; I'm trying to keep the >> etherpad up to date but it may not always be a source of truth. >> >> Useful jobs available so far: >> - devstack-tempest [0] is a simple tempest/devstack job that runs >> keystone glance nova cinder neutron swift and tempest *smoke* filter >> - tempest-full [1] is similar but runs a full test run - it replaces the >> legacy tempest-dsvm-neutron-full from the integrated gate >> - tempest-full-py3 [2] runs a full test run on python3 - it replaces the >> legacy tempest-dsvm-py35 >> > > Some more details on this topic: what I did not mention in my previous > email is that the autogenerated Tempest / Grenade CI jobs (legacy-* > playbooks) are not meant to be used as a basis for Zuul V3 native jobs. To > create Zuul V3 Tempest / Grenade native jobs for your projects you need to > through away the legacy playbooks and defined new jobs in .zuul.yaml, as > documented in the zuul v3 docs [2]. > The parent job for a single node Tempest job will usually be > devstack-tempest. Example migrated jobs are avilable, for instance: [3] [4]. > > [2] > https://docs.openstack.org/infra/manual/zuulv3.html#howto-update-legacy-jobs > > [3] > http://git.openstack.org/cgit/openstack/sahara-tests/tree/.zuul.yaml#n21 > [4] https://review.openstack.org/#/c/543048/5 > > >> >> Both tempest-full and tempest-full-py3 are part of integrated-gate >> templates, starting from stable/queens on. >> The other stable branches still run the legacy jobs, since >> devstack ansible changes have not been backported (yet). If we do backport >> it will be up to pike maximum. 
>> >> Those jobs work in single node mode only at the moment. Enabling >> multinode via job configuration only require a new Zuul feature [4][5] that >> should be available soon; the new feature allows defining host/group >> variables in the job definition, which means setting variables which are >> specific to one host or a group of hosts. >> Multinode DVR and Ironic jobs will require migration of the ovs-* roles >> form devstack-gate to devstack as well. >> >> Grenade jobs (single and multinode) are still legacy, even if the >> *legacy* word has been removed from the name. >> They are currently temporarily hosted in the neutron repository. They are >> going to be implemented as Zuul v3 native in the grenade repository. >> >> Roles are documented, and a couple of migration tips for DEVSTACK_GATE >> flags is available in the etherpad [0]; more comprehensive examples / >> docs will be available as soon as possible. >> >> Please let me know if you find this update useful and / or if you would >> like to see different information in it. >> I will send further updates as soon as significant changes / new features >> become available. >> >> Andrea Frittoli (andreaf) >> >> [0] https://etherpad.openstack.org/p/zuulv3-native-devstack-tempest-jobs >> >> [1] http://git.openstack.org/cgit/openstack/tempest/tree/.zuul.yaml#n1 >> [2] http://git.openstack.org/cgit/openstack/tempest/tree/.zuul.yaml#n29 >> [3] http://git.openstack.org/cgit/openstack/tempest/tree/.zuul.yaml#n47 >> [4] https://etherpad.openstack.org/p/zuulv3-group-variables >> [5] https://review.openstack.org/#/c/544562/ >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From no-reply at openstack.org Tue Feb 20 22:02:09 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Tue, 20 Feb 2018 22:02:09 -0000 Subject: [openstack-dev] [manila] manila 6.0.0.0rc2 (queens) Message-ID: Hello everyone, A new release candidate for manila for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/manila/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/manila/log/?h=stable/queens Release notes for manila can be found at: http://docs.openstack.org/releasenotes/manila/ From Louie.Kwan at windriver.com Tue Feb 20 22:16:31 2018 From: Louie.Kwan at windriver.com (Kwan, Louie) Date: Tue, 20 Feb 2018 22:16:31 +0000 Subject: [openstack-dev] [masakari] [masakari-monitors] : Masakari notification failed. Message-ID: <47EFB32CD8770A4D9590812EE28C977E96251BFB@ALA-MBD.corp.ad.wrs.com> Hi Masakari community, I would like to get your help to understand what may be causing the Masakari notification failed. I do get success cases which the engine got the notification, VM got shutdown and rebooted ok. Having said that, there are some cases that the notification failed and it seems there are some conflicts going on. 20% to 40% chance. 
Feb 20 21:53:21 masakari-2 masakari-engine[3807]: 2018-02-20 21:53:21.517 WARNING masakari.engine.drivers.taskflow.driver [req-ce909151-1afb-4f2f-abf4-f25d54f25c6b service None] Task 'masakari.engine.drivers.taskflow.instance_failure.StopInstanceTask;instance:recovery' (e85dec06-1498-482c-a63a-51f855745c32) transitioned into state 'FAILURE' from state 'RUNNING'
Feb 20 21:53:21 masakari-2 masakari-engine[3807]: 1 predecessors (most recent first):
Feb 20 21:53:21 masakari-2 masakari-engine[3807]: Flow 'instance_recovery_engine': Conflict: Conflict

Is it normal that a masakari notification would fail because of timing or conflicting events? FYI, I only have one VM and one active notification. Enclosed is the log file I got from the engine.

I would appreciate it if any of you could provide some insight into what to do about the failure. Any tips on where to look, etc.? A timeout?

Thanks.
Louie

| notification_uuid                    | generated_time             | status   | type | source_host_uuid                     | payload |
+--------------------------------------+----------------------------+----------+------+--------------------------------------+---------+
| 42ccee84-0ea5-4163-84a5-028a0bb914a3 | 2018-02-20T21:52:03.000000 | failed   | VM   | 66c8b5b9-03f5-4843-8a9c-fa83af807a9b | {u'instance_uuid': u'565da9ba-3c0c-4087-83ca-32a5a1b00a55', u'vir_domain_event': u'STOPPED_FAILED', u'event': u'QEMU_GUEST_AGENT_ERROR'} |
| aa4184f3-b002-4ba8-a403-f22ccd4ce6b5 | 2018-02-20T21:42:54.000000 | finished | VM   | 66c8b5b9-03f5-4843-8a9c-fa83af807a9b | {u'instance_uuid': u'565da9ba-3c0c-4087-83ca-32a5a1b00a55', u'vir_domain_event': u'STOPPED_FAILED', u'event': u'QEMU_GUEST_AGENT_ERROR'} |

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: err.log
Type: application/octet-stream
Size: 269668 bytes
Desc: err.log
URL:

From corvus at inaugust.com Tue Feb 20 23:28:25 2018
From: corvus at inaugust.com (James E. Blair)
Date: Tue, 20 Feb 2018 15:28:25 -0800
Subject: [openstack-dev] [all][infra] Some new Zuul features
Message-ID: <87mv039r6u.fsf@meyer.lemoncheese.net>

Hi,

We've rolled out a few new Zuul features you may find useful.

Added a post-timeout job attribute
==================================

We refined the way timeouts are handled. The "timeout" attribute of a job (which defaults to 30 minutes but can be changed by any job) now covers the time used in the pre-run and run phases of the job. There is now a separate "post-timeout" attribute, which also defaults to 30 minutes, that covers the "post-run" phase of the job.

This means you can adjust the timeout setting for a long running job, and maintain a lower post-timeout setting so that if the job encounters a problem in the post-run phase, we aren't waiting 3 hours for it to time out. You generally shouldn't need to adjust this value, unless you have a job which performs a long artifact upload in its post-run phase.

Docs: https://docs.openstack.org/infra/zuul/user/config.html#attr-job.post-timeout

Added host and group vars
=========================

We added two new job attributes, "host-vars" and "group-vars" which behave just like "vars" in that they define variables for use by Ansible, but they apply to specific hosts or host groups respectively, whereas "vars" applies to all hosts.
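Coming back to the "post-timeout" attribute described at the top of this note, a minimal, hypothetical sketch of a job that does a long artifact upload in its post-run phase; the job name and timeout values (in seconds) are invented purely for illustration:

# Hypothetical job definition, only to show the shape of the new
# attribute; not an actual job from any OpenStack repository.
- job:
    name: example-publish-artifacts
    # "timeout" now covers only the pre-run and run phases
    timeout: 3600
    # "post-timeout" covers the post-run phase, e.g. a long upload,
    # so the main timeout no longer needs to be inflated for it
    post-timeout: 5400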
Docs: https://docs.openstack.org/infra/zuul/user/config.html#attr-job.host-vars -Jim From pabelanger at redhat.com Wed Feb 21 01:19:59 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Tue, 20 Feb 2018 20:19:59 -0500 Subject: [openstack-dev] Release Naming for S - time to suggest a name! Message-ID: <20180221011959.GA30957@localhost.localdomain> Hey everybody, Once again, it is time for us to pick a name for our "S" release. Since the associated Summit will be in Berlin, the Geographic Location has been chosen as "Berlin" (State). Nominations are now open. Please add suitable names to https://wiki.openstack.org/wiki/Release_Naming/S_Proposals between now and 2018-03-05 23:59 UTC. In case you don't remember the rules: * Each release name must start with the letter of the ISO basic Latin alphabet following the initial letter of the previous release, starting with the initial release of "Austin". After "Z", the next name should start with "A" again. * The name must be composed only of the 26 characters of the ISO basic Latin alphabet. Names which can be transliterated into this character set are also acceptable. * The name must refer to the physical or human geography of the region encompassing the location of the OpenStack design summit for the corresponding release. The exact boundaries of the geographic region under consideration must be declared before the opening of nominations, as part of the initiation of the selection process. * The name must be a single word with a maximum of 10 characters. Words that describe the feature should not be included, so "Foo City" or "Foo Peak" would both be eligible as "Foo". Names which do not meet these criteria but otherwise sound really cool should be added to a separate section of the wiki page and the TC may make an exception for one or more of them to be considered in the Condorcet poll. The naming official is responsible for presenting the list of exceptional names for consideration to the TC before the poll opens. Let the naming begin. Paul From gmann at ghanshyammann.com Wed Feb 21 01:23:22 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 21 Feb 2018 10:23:22 +0900 Subject: [openstack-dev] [QA] [all] QA Rocky PTG Planning In-Reply-To: References: Message-ID: Hi All, As we are close to PTG, I have prepared the QA PTG Schedule - https://ethercalc.openstack.org/Rocky-PTG-QA-Schedule Detail of each sessions can be found in this etherpad - https://etherpad.openstack.org/p/qa-rocky-ptg We still have space for more sessions or topic if any of you would like to add. If so please write those to etherpad with your irc name. Sessions Scheduled is flexible and we can reschedule based on request but do let me know before 22nd Feb. We do have some sessions where we need cross projects interaction and QA help rooms which i will be publishing separately. -gmann On Thu, Jan 18, 2018 at 7:32 PM, Andrea Frittoli wrote: > and the link [1] > > [1] https://etherpad.openstack.org/p/qa-rocky-ptg > > On Thu, Jan 18, 2018 at 10:28 AM Andrea Frittoli > wrote: >> >> Dear all, >> >> I started the etherpad for planning the QA work in Dublin. >> Please add your ideas / proposals for sessions and intention of attending. >> We have a room for the QA team for three full days Wed-Fri. >> >> This time I also included a "Request for Sessions" - if anyone would like >> to discuss a QA related topic with the QA team or do a hands-on / sprint on >> something feel free to add it in there. 
We can also handle them in a less >> unstructured format during the PTG - but if there are a few requests on >> similar topics we can setup a session on Mon/Tue for everyone interested to >> attend. >> >> Andrea Frittoli (andreaf) > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From gmann at ghanshyammann.com Wed Feb 21 02:01:11 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 21 Feb 2018 11:01:11 +0900 Subject: [openstack-dev] [QA] Queens Retrospective Message-ID: Hi All, I have started an etherpad for a Queens cycle retrospective for QA - https://etherpad.openstack.org/p/qa-queens-retrospective This will be discussed in PTG on Wed 9.30-10.00 AM, so please add your feedback/comment before that. Everyone is welcome to add the feedback which help us to improve the required things in next cycle. -gmann From gmann at ghanshyammann.com Wed Feb 21 03:33:47 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 21 Feb 2018 12:33:47 +0900 Subject: [openstack-dev] [all] [QA] PTG QA Help Room on Monday-Tuesday: open discussion + stestr + tempest plugins Message-ID: Hi All, As you might have notice about QA Help Rooms in mail thread[1], this is further announcement of the QA Help Room from Monday- Tuesday in Dublin PTG. QA team will be present in "Davin Suite" along with infra, stable, release and other horizontal team [2]. Main idea for conducting the helproom is to get together and discuss the topuics or any help you want from QA or knowledge sharing etc. We are open to discuss any topic during helproom. Along with that we also have 2 dedicated Topic to cover. Schedule & Details for QA helproom can be found in QA PTG etherpad [3] 1. "Stestr Sessions followed by Q&A". - Interested people can learn about stestr, how to use, migrate to stestr. If you have some feedback or improvement points Or i will say if you need any kind of help on stestr then, this is good sessions to join. Thanks to matt and masayuki for giving time for this. 2. "Tempest Plugin, May we Assist you". - We will spend some dedicated time for Tempest Plugins. There are always many queries about what all tempest interfaces plugins should use and sometime complain (or many time :)) Tempest always change their interface and break plugins. Let's understand the interface change issue and what interfaces plugins should and should not use. Though we have very clear documentation for that but still it is good to talk face to face. If you want to implement new Tempest plugin, you can ask us how to do or if you have existing plugin, you can ask for any kind of help/queries. In addition, feedback/improvement/specific requirement can be discussed. 3. Open Discussion - Everyone is most welcome to discuss any topic, learn about QA, help us or get help from us. .. [1] http://lists.openstack.org/pipermail/openstack-dev/2018-February/127407.html .. [2] http://ptg.openstack.org/ptg.html .. 
[3] https://ethercalc.openstack.org/Rocky-PTG-QA-Schedule https://etherpad.openstack.org/p/qa-rocky-ptg -gmann From no-reply at openstack.org Wed Feb 21 05:45:19 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Wed, 21 Feb 2018 05:45:19 -0000 Subject: [openstack-dev] [horizon] horizon 13.0.0.0rc2 (queens) Message-ID: Hello everyone, A new release candidate for horizon for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/horizon/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/horizon/log/?h=stable/queens Release notes for horizon can be found at: http://docs.openstack.org/releasenotes/horizon/ From no-reply at openstack.org Wed Feb 21 05:52:38 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Wed, 21 Feb 2018 05:52:38 -0000 Subject: [openstack-dev] [octavia] octavia 2.0.0.0rc2 (queens) Message-ID: Hello everyone, A new release candidate for octavia for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/octavia/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/octavia/log/?h=stable/queens Release notes for octavia can be found at: http://docs.openstack.org/releasenotes/octavia/ From no-reply at openstack.org Wed Feb 21 06:02:06 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Wed, 21 Feb 2018 06:02:06 -0000 Subject: [openstack-dev] [octavia] octavia-dashboard 1.0.0.0rc2 (queens) Message-ID: Hello everyone, A new release candidate for octavia-dashboard for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/octavia-dashboard/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/octavia-dashboard/log/?h=stable/queens Release notes for octavia-dashboard can be found at: http://docs.openstack.org/releasenotes/octavia-dashboard/ If you find an issue that could be considered release-critical, please file it at: https://storyboard.openstack.org/#!/project/909 and tag it *queens-rc-potential* to bring it to the octavia-dashboard release crew's attention. From y.furukawa_2 at jp.fujitsu.com Wed Feb 21 06:49:06 2018 From: y.furukawa_2 at jp.fujitsu.com (Furukawa, Yushiro) Date: Wed, 21 Feb 2018 06:49:06 +0000 Subject: [openstack-dev] Etherpad for self-healing Message-ID: Hi everyone, I am seeing Self-healing scheduled on Tuesday afternoon[1], but the etherpad for it is not listed in [2]. I made following etherpad by some chance. Would it be possible to update Etherpads wiki page? 
https://etherpad.openstack.org/p/self-healing-ptg-rocky Best regards, [1] https://www.openstack.org/ptg/#tab_schedule [2] https://wiki.openstack.org/wiki/PTG/Rocky/Etherpads ---- Yushiro Furukawa From mrunge at redhat.com Wed Feb 21 08:40:33 2018 From: mrunge at redhat.com (Matthias Runge) Date: Wed, 21 Feb 2018 09:40:33 +0100 Subject: [openstack-dev] [kolla]Fwd: [Openstack-stable-maint] Stable check of openstack/kolla failed In-Reply-To: <20180219072749.ez7w63kii3zs7kgs@sofja.berg.ol> References: <5548F4AD-589D-45A4-AE69-DFCEB68B1216@gmx.com> <20180219072749.ez7w63kii3zs7kgs@sofja.berg.ol> Message-ID: On 19/02/18 08:27, Matthias Runge wrote: > On Sun, Feb 18, 2018 at 05:15:18AM -0600, Sean McGinnis wrote: >> Hello kolla team, >> >> It looks like stable builds for kolla have been failing for some time now. Just forwarding this on to make sure the team is aware of it before the need for a stable release comes up. >> >>> Build failed. >>> Looking at this again, for example the centos-binary push job times out. Especially, this here[1] is apparently started, but has not ended. Unfortunately, I haven't been able to find any logs regarding docker push 2018-02-21 07:08:57.101374 | primary | 92bd2d995bd5: Pushed 2018-02-21 07:08:58.693761 | POST-RUN END RESULT_TIMED_OUT: [untrusted : git.openstack.org/openstack/kolla/tests/playbooks/publish.yml at stable/ocata] 2018-02-21 07:08:58.693989 | POST-RUN START: [untrusted : git.openstack.org/openstack/kolla/tests/playbooks/post.yml at stable/ocata] 2018-02-21 07:09:00.270882 | 2018-02-21 07:09:00.271153 | PLAY [all] 2018-02-21 07:09:00.416795 | 2018-02-21 07:09:00.417030 | TASK [shell] 2018-02-21 07:09:01.129686 | primary | /usr/bin/journalctl 2018-02-21 07:09:01.394259 | primary | Warning: Unable to open /dev/sr0 read-write (Read-only file system). /dev/sr0 2018-02-21 07:09:01.394390 | primary | has been opened read-only. 2018-02-21 07:09:01.395668 | primary | Error: /dev/sr0: unrecognised disk label 2018-02-21 07:09:02.797357 | primary | WARNING: bridge-nf-call-iptables is disabled 2018-02-21 07:09:02.797457 | primary | WARNING: bridge-nf-call-ip6tables is disabled 2018-02-21 07:09:06.029466 | primary | ok: Runtime: 0:00:04.787576 2018-02-21 07:09:06.079653 | 2018-02-21 07:09:06.079927 | TASK [synchronize] [1] http://logs.openstack.org/periodic-stable/git.openstack.org/openstack/kolla/stable/ocata/kolla-publish-centos-binary/dc859f1/ara/reports/a1a13a4a-3e90-4290-82e4-e031ce6029ad.html -- Matthias Runge Red Hat GmbH, http://www.de.redhat.com/, Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Michael Cunningham, Michael O'Neill, Eric Shander From edouard.thuleau at gmail.com Wed Feb 21 10:30:14 2018 From: edouard.thuleau at gmail.com (=?UTF-8?Q?=C3=89douard_Thuleau?=) Date: Wed, 21 Feb 2018 11:30:14 +0100 Subject: [openstack-dev] [nova] Contrail VIF TAP plugging broken Message-ID: Hi Seán, Michael, Since patch [1] moved Contrail VIF plugging under privsep, Nova fails to plug TAP on the Contrail software switch (named vrouter) [2]. I proposed a fix in the beginning of the year [3] but it still pending approval even it got a couple of +1 and no negative feedback. It's why I'm writing that email to get your attention. That issue appeared during the Queens development cycle and we need to fix that before it was released (hope we are not to late). Contrail already started to move on os-vif driver [4]. 
A first VIF type driver is there for DPDK case [5], we plan to do the same for the TAP case in the R release and remove the Nova VIF plugging code for the vrouter. [1] https://review.openstack.org/#/c/515916/ [2] https://bugs.launchpad.net/nova/+bug/1742963 [3] https://review.openstack.org/#/c/533212/ [4] https://github.com/Juniper/contrail-nova-vif-driver [5] https://review.openstack.org/#/c/441183/ Regards, Édouard. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pkovar at redhat.com Wed Feb 21 12:14:25 2018 From: pkovar at redhat.com (Petr Kovar) Date: Wed, 21 Feb 2018 13:14:25 +0100 Subject: [openstack-dev] [docs] Documentation meeting today Message-ID: <20180221131425.ef2a6bb0b7a585ba95d56306@redhat.com> Hi all, The docs meeting will continue today at 16:00 UTC in #openstack-doc, as scheduled. For more details, see the meeting page: https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting Cheers, pk From Paul.Vaduva at enea.com Wed Feb 21 13:22:02 2018 From: Paul.Vaduva at enea.com (Paul Vaduva) Date: Wed, 21 Feb 2018 13:22:02 +0000 Subject: [openstack-dev] [vitrage] Vitrage alarm processing behavior In-Reply-To: References: <2E8BC35D-3FC3-40C1-85F2-09E4C3D4BB2E@nokia.com> Message-ID: Hi Ifat, Sorry for the late reply. To answer your questions I started as an example from the doctor datasource (or a porting of it for the 1.3.0 version of vitrage) but will call it something different so no need to worry about conflicting with present doctor datasource. I added polling alarms to it but I have a more particular use case: * I get compute host down alarm on event * I can't get host up event or it's an intricate sollution to implement I tried to see if I can make the following scenario work: Let's call Scenario I * Get a compute host down event (Raisng an alarm) * Periodically poll for the status of the compute in method "def _get_alarms(self):" of the Driver object Both type of Interactions seem to work (polling and event based). However now comes the tricky part. I would need for the alarms (with status up / compute host up) returned by method "def _get_alarms(self):" of this Driver object to cancel/clear the compute host down alarms raised by event. This unfortunatelly does not happen. Oddely enough there is a mimic of this scenario that works but is not robust enough for out needs. Let's call Scenario II: * Gettting an event with compute host down(when one of our compute actually goes down) * Polling alarm (also compute host down) is raised and somehow overwrites the event based one (I can see the updated time). * After a while the actual compute reboots and polling for the alarms returns an alarm with status up that in this case clears the previous (I assume polling type now) alarm. Now I can't understand why this second scenario works and the first one does not. It seems as the same alarm type (compute host down with status down) obtained by polling can overwrite an identical type and status alarm raised by event, but An alarm with an updated status (i. e. up) got by polling mode cannot overwrite / clear and alarm with status down got by an event. I am wondering if there is a reason of this behavior and if there is a way to modify it or is it a bug. For the event's generation I use modified version of zabbix_vitrage.py script that publishes to rabbitmq vitrage_notifications.info queue. I have attached this python script. The code is still experimental But I wanted to know if it's logically posible to create The scenario we need, Scenario I. 
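To make Scenario I concrete, here is a rough, hypothetical sketch of the two payloads being compared: the event that raises the alarm and the polled alarm that is expected to clear it. The 'compute.host.down' type and the 'up'/'down' status follow what is discussed in this thread, but the 'details'/'hostname' fields, the host name and the timestamps are assumptions, so treat this only as an illustration of the intent, not as a verified fix:

# Hypothetical illustration of Scenario I (not a verified fix).

# Event pushed to the datasource when the compute host fails;
# this is what raises the alarm:
down_event = {
    'time': '2018-02-21 13:00:00',
    'type': 'compute.host.down',
    'details': {
        'hostname': 'compute-0',
        'status': 'down',
    },
}

# Alarm returned later by the polling side (Driver._get_alarms());
# the 'up' status is what is expected to clear the alarm above:
up_alarm = {
    'time': '2018-02-21 13:10:00',
    'type': 'compute.host.down',
    'details': {
        'hostname': 'compute-0',
        'status': 'up',
    },
}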
Best Regards Paul From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com] Sent: Wednesday, February 7, 2018 7:16 PM To: OpenStack Development Mailing List (not for usage questions) Cc: Ciprian Barbu Subject: Re: [openstack-dev] [vitrage] Vitrage alarm processing behavior Hi Paul, I’m glad that my fix helped. Regarding the Doctor datasource: the purpose of this datasource was to be used by the Doctor test scripts. Do you intend to modify it, or to create a new similar datasource that also supports polling? Modifying the existing datasource could be problematic, since we need to make sure the existing functionality and tests stay the same. In general, most of our datasources support both polling and notifications. A simple example is the Cinder datasource [1]. For example of an alarm datasource, you can look at Zabbix datasource [2]. You can also go over the documentation of how to add a new datasource [3]. As for your question, it is the responsibility of the datasource to clear the alarms that it created. For the Doctor datasource, you can send an event with “status”:”up” in the details and the datasource will clear the alarm. [1] https://github.com/openstack/vitrage/tree/master/vitrage/datasources/cinder/volume [2] https://github.com/openstack/vitrage/tree/master/vitrage/datasources/zabbix [3] https://docs.openstack.org/vitrage/latest/contributor/add-new-datasource.html Best Regards, Ifat. From: Paul Vaduva > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Wednesday, 7 February 2018 at 15:50 To: "OpenStack Development Mailing List (not for usage questions)" > Cc: Ciprian Barbu > Subject: Re: [openstack-dev] [vitrage] Vitrage alarm processing behavior Hi Ifat, Yes I’ve checked the 1.3.1 refers to a deb package (python-vitrage) version built by us, so the git tag used to build that deb is 1.3.0. But I also backported doctor datasource from vitreage git master branch. I also noticed that when I configure snapshots_interval=10 I also get this exception in /var/log/vitrage/graph.log around the time the alarms disapear. https://hastebin.com/ukisajojef.sql I've cherry picked your before mentioned change and the alarm that came from event is now persistent and the exception is gone. So it was a bug. I understand that for doctor datasources I need to have events for raising the alarm and also for clearing it is that correct? Best Regards, Paul From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com] Sent: Wednesday, February 7, 2018 1:24 PM To: OpenStack Development Mailing List (not for usage questions) > Subject: Re: [openstack-dev] [vitrage] Vitrage alarm processing behavior Hi Paul, It sounds like a bug. Alarms created by a datasource are not supposed to be deleted later on. It might be a bug that was fixed in Queens [1]. I’m not sure which Vitrage version you are actually using. I failed to find a vitrage version 1.3.1. Could it be that you are referring to a version of python-vitrageclient or vitrage-dashboard? In any case, if you are using an older version, I suggest that you try to use the fix that I mentioned [1] and see if it helps. [1] https://review.openstack.org/#/c/524228 Best Regards, Ifat. 
From: Paul Vaduva > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Wednesday, 7 February 2018 at 11:58 To: "openstack-dev at lists.openstack.org" > Subject: [openstack-dev] [vitrage] Vitrage alarm processing behavior Hi Vitrage developers, I have a question about vitrage innerworkings, I ported doctor datasource from master branch to an earlier version of vitrage (1.3.1). I noticed some behavior I am wondering if it's ok or it is bug of some sort. Here it is: 1. I am sending some event for rasing an alarm to doctor datasource of vitrage. 2. I am receiving the event hence the alarm is displayed on vitrage dashboard attached to the affected resource (as expected) 3. If I have configured snapshot_interval=10 in /etc/vitrage/vitrage.conf The alarm disapears after a while fragment from /etc/vitrage/vitrage.conf *************** [datasources] types = nova.host,nova.instance,nova.zone,cinder.volume,neutron.network,neutron.port,doctor snapshots_interval=10 *************** On the other hand if I comment it out the alarm persists ************** [datasources] types = nova.host,nova.instance,nova.zone,cinder.volume,neutron.network,neutron.port,doctor #snapshots_interval=10 ************** I am interested if this behavior is correct or is this a bug. My intention is to create some sort of hybrid datasource starting from the doctor one, that receives events for raising alarms like compute.host.down but uses polling to clear them. Best Regards, Paul Vaduva -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: doctor_vitrage.py Type: application/octet-stream Size: 3613 bytes Desc: doctor_vitrage.py URL: From Paul.Vaduva at enea.com Wed Feb 21 14:30:50 2018 From: Paul.Vaduva at enea.com (Paul Vaduva) Date: Wed, 21 Feb 2018 14:30:50 +0000 Subject: [openstack-dev] [vitrage] Vitrage alarm processing behavior In-Reply-To: References: <2E8BC35D-3FC3-40C1-85F2-09E4C3D4BB2E@nokia.com> Message-ID: I attached also the driver.py that I am using. From: Paul Vaduva [mailto:Paul.Vaduva at enea.com] Sent: Wednesday, February 21, 2018 3:22 PM To: OpenStack Development Mailing List (not for usage questions) Cc: Ciprian Barbu Subject: [Attachment removed] Re: [openstack-dev] [vitrage] Vitrage alarm processing behavior Hi Ifat, Sorry for the late reply. To answer your questions I started as an example from the doctor datasource (or a porting of it for the 1.3.0 version of vitrage) but will call it something different so no need to worry about conflicting with present doctor datasource. I added polling alarms to it but I have a more particular use case: * I get compute host down alarm on event * I can't get host up event or it's an intricate sollution to implement I tried to see if I can make the following scenario work: Let's call Scenario I * Get a compute host down event (Raisng an alarm) * Periodically poll for the status of the compute in method "def _get_alarms(self):" of the Driver object Both type of Interactions seem to work (polling and event based). However now comes the tricky part. I would need for the alarms (with status up / compute host up) returned by method "def _get_alarms(self):" of this Driver object to cancel/clear the compute host down alarms raised by event. This unfortunatelly does not happen. Oddely enough there is a mimic of this scenario that works but is not robust enough for out needs. 
Let's call Scenario II: * Gettting an event with compute host down(when one of our compute actually goes down) * Polling alarm (also compute host down) is raised and somehow overwrites the event based one (I can see the updated time). * After a while the actual compute reboots and polling for the alarms returns an alarm with status up that in this case clears the previous (I assume polling type now) alarm. Now I can't understand why this second scenario works and the first one does not. It seems as the same alarm type (compute host down with status down) obtained by polling can overwrite an identical type and status alarm raised by event, but An alarm with an updated status (i. e. up) got by polling mode cannot overwrite / clear and alarm with status down got by an event. I am wondering if there is a reason of this behavior and if there is a way to modify it or is it a bug. For the event's generation I use modified version of zabbix_vitrage.py script that publishes to rabbitmq vitrage_notifications.info queue. I have attached this python script. The code is still experimental But I wanted to know if it's logically posible to create The scenario we need, Scenario I. Best Regards Paul From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com] Sent: Wednesday, February 7, 2018 7:16 PM To: OpenStack Development Mailing List (not for usage questions) > Cc: Ciprian Barbu > Subject: Re: [openstack-dev] [vitrage] Vitrage alarm processing behavior Hi Paul, I’m glad that my fix helped. Regarding the Doctor datasource: the purpose of this datasource was to be used by the Doctor test scripts. Do you intend to modify it, or to create a new similar datasource that also supports polling? Modifying the existing datasource could be problematic, since we need to make sure the existing functionality and tests stay the same. In general, most of our datasources support both polling and notifications. A simple example is the Cinder datasource [1]. For example of an alarm datasource, you can look at Zabbix datasource [2]. You can also go over the documentation of how to add a new datasource [3]. As for your question, it is the responsibility of the datasource to clear the alarms that it created. For the Doctor datasource, you can send an event with “status”:”up” in the details and the datasource will clear the alarm. [1] https://github.com/openstack/vitrage/tree/master/vitrage/datasources/cinder/volume [2] https://github.com/openstack/vitrage/tree/master/vitrage/datasources/zabbix [3] https://docs.openstack.org/vitrage/latest/contributor/add-new-datasource.html Best Regards, Ifat. From: Paul Vaduva > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Wednesday, 7 February 2018 at 15:50 To: "OpenStack Development Mailing List (not for usage questions)" > Cc: Ciprian Barbu > Subject: Re: [openstack-dev] [vitrage] Vitrage alarm processing behavior Hi Ifat, Yes I’ve checked the 1.3.1 refers to a deb package (python-vitrage) version built by us, so the git tag used to build that deb is 1.3.0. But I also backported doctor datasource from vitreage git master branch. I also noticed that when I configure snapshots_interval=10 I also get this exception in /var/log/vitrage/graph.log around the time the alarms disapear. https://hastebin.com/ukisajojef.sql I've cherry picked your before mentioned change and the alarm that came from event is now persistent and the exception is gone. So it was a bug. 
I understand that for doctor datasources I need to have events for raising the alarm and also for clearing it is that correct? Best Regards, Paul From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com] Sent: Wednesday, February 7, 2018 1:24 PM To: OpenStack Development Mailing List (not for usage questions) > Subject: Re: [openstack-dev] [vitrage] Vitrage alarm processing behavior Hi Paul, It sounds like a bug. Alarms created by a datasource are not supposed to be deleted later on. It might be a bug that was fixed in Queens [1]. I’m not sure which Vitrage version you are actually using. I failed to find a vitrage version 1.3.1. Could it be that you are referring to a version of python-vitrageclient or vitrage-dashboard? In any case, if you are using an older version, I suggest that you try to use the fix that I mentioned [1] and see if it helps. [1] https://review.openstack.org/#/c/524228 Best Regards, Ifat. From: Paul Vaduva > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Wednesday, 7 February 2018 at 11:58 To: "openstack-dev at lists.openstack.org" > Subject: [openstack-dev] [vitrage] Vitrage alarm processing behavior Hi Vitrage developers, I have a question about vitrage innerworkings, I ported doctor datasource from master branch to an earlier version of vitrage (1.3.1). I noticed some behavior I am wondering if it's ok or it is bug of some sort. Here it is: 1. I am sending some event for rasing an alarm to doctor datasource of vitrage. 2. I am receiving the event hence the alarm is displayed on vitrage dashboard attached to the affected resource (as expected) 3. If I have configured snapshot_interval=10 in /etc/vitrage/vitrage.conf The alarm disapears after a while fragment from /etc/vitrage/vitrage.conf *************** [datasources] types = nova.host,nova.instance,nova.zone,cinder.volume,neutron.network,neutron.port,doctor snapshots_interval=10 *************** On the other hand if I comment it out the alarm persists ************** [datasources] types = nova.host,nova.instance,nova.zone,cinder.volume,neutron.network,neutron.port,doctor #snapshots_interval=10 ************** I am interested if this behavior is correct or is this a bug. My intention is to create some sort of hybrid datasource starting from the doctor one, that receives events for raising alarms like compute.host.down but uses polling to clear them. Best Regards, Paul Vaduva -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: driver.py Type: application/octet-stream Size: 5846 bytes Desc: driver.py URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: doctor_vitrage.py Type: application/octet-stream Size: 3613 bytes Desc: doctor_vitrage.py URL: From Paul.Vaduva at enea.com Wed Feb 21 14:35:11 2018 From: Paul.Vaduva at enea.com (Paul Vaduva) Date: Wed, 21 Feb 2018 14:35:11 +0000 Subject: [openstack-dev] [vitrage] Vitrage alarm processing behavior References: <2E8BC35D-3FC3-40C1-85F2-09E4C3D4BB2E@nokia.com> Message-ID: Sorry forgot to add you. From: Paul Vaduva Sent: Wednesday, February 21, 2018 4:31 PM To: OpenStack Development Mailing List (not for usage questions) Cc: Ciprian Barbu Subject: RE: [openstack-dev] [vitrage] Vitrage alarm processing behavior I attached also the driver.py that I am using. 
From: Paul Vaduva [mailto:Paul.Vaduva at enea.com] Sent: Wednesday, February 21, 2018 3:22 PM To: OpenStack Development Mailing List (not for usage questions) > Cc: Ciprian Barbu > Subject: [Attachment removed] Re: [openstack-dev] [vitrage] Vitrage alarm processing behavior Hi Ifat, Sorry for the late reply. To answer your questions I started as an example from the doctor datasource (or a porting of it for the 1.3.0 version of vitrage) but will call it something different so no need to worry about conflicting with present doctor datasource. I added polling alarms to it but I have a more particular use case: * I get compute host down alarm on event * I can't get host up event or it's an intricate sollution to implement I tried to see if I can make the following scenario work: Let's call Scenario I * Get a compute host down event (Raisng an alarm) * Periodically poll for the status of the compute in method "def _get_alarms(self):" of the Driver object Both type of Interactions seem to work (polling and event based). However now comes the tricky part. I would need for the alarms (with status up / compute host up) returned by method "def _get_alarms(self):" of this Driver object to cancel/clear the compute host down alarms raised by event. This unfortunatelly does not happen. Oddely enough there is a mimic of this scenario that works but is not robust enough for out needs. Let's call Scenario II: * Gettting an event with compute host down(when one of our compute actually goes down) * Polling alarm (also compute host down) is raised and somehow overwrites the event based one (I can see the updated time). * After a while the actual compute reboots and polling for the alarms returns an alarm with status up that in this case clears the previous (I assume polling type now) alarm. Now I can't understand why this second scenario works and the first one does not. It seems as the same alarm type (compute host down with status down) obtained by polling can overwrite an identical type and status alarm raised by event, but An alarm with an updated status (i. e. up) got by polling mode cannot overwrite / clear and alarm with status down got by an event. I am wondering if there is a reason of this behavior and if there is a way to modify it or is it a bug. For the event's generation I use modified version of zabbix_vitrage.py script that publishes to rabbitmq vitrage_notifications.info queue. I have attached this python script. The code is still experimental But I wanted to know if it's logically posible to create The scenario we need, Scenario I. Best Regards Paul From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com] Sent: Wednesday, February 7, 2018 7:16 PM To: OpenStack Development Mailing List (not for usage questions) > Cc: Ciprian Barbu > Subject: Re: [openstack-dev] [vitrage] Vitrage alarm processing behavior Hi Paul, I’m glad that my fix helped. Regarding the Doctor datasource: the purpose of this datasource was to be used by the Doctor test scripts. Do you intend to modify it, or to create a new similar datasource that also supports polling? Modifying the existing datasource could be problematic, since we need to make sure the existing functionality and tests stay the same. In general, most of our datasources support both polling and notifications. A simple example is the Cinder datasource [1]. For example of an alarm datasource, you can look at Zabbix datasource [2]. You can also go over the documentation of how to add a new datasource [3]. 
As for your question, it is the responsibility of the datasource to clear the alarms that it created. For the Doctor datasource, you can send an event with “status”:”up” in the details and the datasource will clear the alarm. [1] https://github.com/openstack/vitrage/tree/master/vitrage/datasources/cinder/volume [2] https://github.com/openstack/vitrage/tree/master/vitrage/datasources/zabbix [3] https://docs.openstack.org/vitrage/latest/contributor/add-new-datasource.html Best Regards, Ifat. From: Paul Vaduva > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Wednesday, 7 February 2018 at 15:50 To: "OpenStack Development Mailing List (not for usage questions)" > Cc: Ciprian Barbu > Subject: Re: [openstack-dev] [vitrage] Vitrage alarm processing behavior Hi Ifat, Yes I’ve checked the 1.3.1 refers to a deb package (python-vitrage) version built by us, so the git tag used to build that deb is 1.3.0. But I also backported doctor datasource from vitreage git master branch. I also noticed that when I configure snapshots_interval=10 I also get this exception in /var/log/vitrage/graph.log around the time the alarms disapear. https://hastebin.com/ukisajojef.sql I've cherry picked your before mentioned change and the alarm that came from event is now persistent and the exception is gone. So it was a bug. I understand that for doctor datasources I need to have events for raising the alarm and also for clearing it is that correct? Best Regards, Paul From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com] Sent: Wednesday, February 7, 2018 1:24 PM To: OpenStack Development Mailing List (not for usage questions) > Subject: Re: [openstack-dev] [vitrage] Vitrage alarm processing behavior Hi Paul, It sounds like a bug. Alarms created by a datasource are not supposed to be deleted later on. It might be a bug that was fixed in Queens [1]. I’m not sure which Vitrage version you are actually using. I failed to find a vitrage version 1.3.1. Could it be that you are referring to a version of python-vitrageclient or vitrage-dashboard? In any case, if you are using an older version, I suggest that you try to use the fix that I mentioned [1] and see if it helps. [1] https://review.openstack.org/#/c/524228 Best Regards, Ifat. From: Paul Vaduva > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Wednesday, 7 February 2018 at 11:58 To: "openstack-dev at lists.openstack.org" > Subject: [openstack-dev] [vitrage] Vitrage alarm processing behavior Hi Vitrage developers, I have a question about vitrage innerworkings, I ported doctor datasource from master branch to an earlier version of vitrage (1.3.1). I noticed some behavior I am wondering if it's ok or it is bug of some sort. Here it is: 1. I am sending some event for rasing an alarm to doctor datasource of vitrage. 2. I am receiving the event hence the alarm is displayed on vitrage dashboard attached to the affected resource (as expected) 3. 
If I have configured snapshot_interval=10 in /etc/vitrage/vitrage.conf The alarm disapears after a while fragment from /etc/vitrage/vitrage.conf *************** [datasources] types = nova.host,nova.instance,nova.zone,cinder.volume,neutron.network,neutron.port,doctor snapshots_interval=10 *************** On the other hand if I comment it out the alarm persists ************** [datasources] types = nova.host,nova.instance,nova.zone,cinder.volume,neutron.network,neutron.port,doctor #snapshots_interval=10 ************** I am interested if this behavior is correct or is this a bug. My intention is to create some sort of hybrid datasource starting from the doctor one, that receives events for raising alarms like compute.host.down but uses polling to clear them. Best Regards, Paul Vaduva -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Wed Feb 21 14:35:15 2018 From: zigo at debian.org (Thomas Goirand) Date: Wed, 21 Feb 2018 15:35:15 +0100 Subject: [openstack-dev] [heat] heat-dashboard is non-free, with broken unit test env Message-ID: <4129015c-b120-786f-60e5-2d6a634f3999@debian.org> Hi there! I'm having big trouble package heat-dashboard for Debian. I hope I can get help through this list. In here: heat_dashboard/static/dashboard/project/heat_dashboard/template_generator/js/ we have minified *only* versions of Javascript. 1/ Why is there only minified versions? That's non-free to me, Debian, and probably any other distro caring about OpenStack. 2/ Why do we even have a folder called "vendors"? Doesn't this sound really a bad practice? 3/ Why is there so many angular-*.min.js files? Do we need them all? 4/ Why isn't the package using xstatic-angular and friends? As it stands, I can't upload heat-dashboard to Debian for Queens, and it's been removed from Horizon... :( Oh, and I almost forgot! When running unit tests, I get: PYTHON=python$i NOSE_WITH_OPENSTACK=1 \ NOSE_OPENSTACK_COLOR=1 \ NOSE_OPENSTACK_RED=0.05 \ NOSE_OPENSTACK_YELLOW=0.025 \ NOSE_OPENSTACK_SHOW_ELAPSED=1 \ DJANGO_SETTINGS_MODULE=heat_dashboard.test.settings \ python$i /home/zigo/sources/openstack/queens/services/heat-dashboard/build-area/heat-dashboard-1.0.2/manage.py test heat_dashboard.test --settings=heat_dashboard.test.settings No local_settings file found. Traceback (most recent call last): File "/home/zigo/sources/openstack/queens/services/heat-dashboard/build-area/heat-dashboard-1.0.2/manage.py", line 23, in execute_from_command_line(sys.argv) [ ... some stack dump ...] File "/usr/lib/python3/dist-packages/fasteners/process_lock.py", line 147, in acquire self._do_open() File "/usr/lib/python3/dist-packages/fasteners/process_lock.py", line 119, in _do_open self.lockfile = open(self.path, 'a') PermissionError: [Errno 13] Permission denied: '/usr/lib/python3/dist-packages/openstack_dashboard/local/_usr_lib_python3_dist-packages_openstack_dashboard_local_.secret_key_store.lock' What thing is attempting to write in my read only /usr, while Horizon is correctly installed, and writing its secret key material as it should, in /var/lib/openstack-dashboard? It's probably due to me, but here, how can I make heat-dashboard unit test behave during package build? What's this "No local_settings file found" thing? Other dashboards didn't complain in this way... 
Cheers, Thomas Goirand (zigo) From thingee at gmail.com Wed Feb 21 15:55:19 2018 From: thingee at gmail.com (Mike Perez) Date: Wed, 21 Feb 2018 07:55:19 -0800 Subject: [openstack-dev] [ptg] Lightning talks In-Reply-To: <20180212153634.GG14568@gmail.com> References: <20180208002535.GA14568@gmail.com> <20180212153634.GG14568@gmail.com> Message-ID: <20180221155519.GA32596@gmail.com> I would like to extend the deadline for Lightning Talks at the PTG to February 23rd 23:59 UTC to fill in the few more slots we have available. Details quoted below, thanks! -- Mike Perez (thingee) On 02:36 Feb 13, Mike Perez wrote: > On 11:25 Feb 08, Mike Perez wrote: > > Hey all! > > > > I'm looking for six 5-minute lightning talks for the PTG in Dublin. This will > > be on Friday March 2nd at 13:00-13:30 local time. > > > > Appropriate 5 minute talk examples: > > * Neat features in libraries like oslo that we should consider adopting in our > > community wide goals. > > * Features and tricks in your favorite editor that makes doing work easier. > > * Infra tools that maybe not a lot of people know about yet. Zuul v3 explained > > in five minutes anyone? > > * Some potential API specification from the API SIG that we should adopt as > > a community wide goal. > > > > Please email me DIRECTLY the following information: > > > > Title: > > Speaker(s) full name: > > Abstract: > > Link to presentation or attachment if you have it already. Laptop on stage will > > be loaded with your presentation already. I'll have open office available so > > odp, odg, otp, pdf, limited ppt format support. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From ifat.afek at nokia.com Wed Feb 21 16:18:18 2018 From: ifat.afek at nokia.com (Afek, Ifat (Nokia - IL/Kfar Sava)) Date: Wed, 21 Feb 2018 16:18:18 +0000 Subject: [openstack-dev] [vitrage] Vitrage alarm processing behavior In-Reply-To: References: <2E8BC35D-3FC3-40C1-85F2-09E4C3D4BB2E@nokia.com> Message-ID: Hi Paul, I suggest that you do the following: · Add a LOG message at the end of _get_alarms to print all alarms that are returned by this function · Restart vitrage-graph and send me its log. I’d like to see if there is any difference between the alarm that is raised and the alarm that is deleted. Thanks, Ifat. From: Paul Vaduva Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Wednesday, 21 February 2018 at 16:30 To: "OpenStack Development Mailing List (not for usage questions)" Cc: Ciprian Barbu Subject: Re: [openstack-dev] [vitrage] Vitrage alarm processing behavior I attached also the driver.py that I am using. From: Paul Vaduva [mailto:Paul.Vaduva at enea.com] Sent: Wednesday, February 21, 2018 3:22 PM To: OpenStack Development Mailing List (not for usage questions) Cc: Ciprian Barbu Subject: [Attachment removed] Re: [openstack-dev] [vitrage] Vitrage alarm processing behavior Hi Ifat, Sorry for the late reply. To answer your questions I started as an example from the doctor datasource (or a porting of it for the 1.3.0 version of vitrage) but will call it something different so no need to worry about conflicting with present doctor datasource. 
I added polling alarms to it but I have a more particular use case: * I get compute host down alarm on event * I can't get host up event or it's an intricate sollution to implement I tried to see if I can make the following scenario work: Let's call Scenario I * Get a compute host down event (Raisng an alarm) * Periodically poll for the status of the compute in method "def _get_alarms(self):" of the Driver object Both type of Interactions seem to work (polling and event based). However now comes the tricky part. I would need for the alarms (with status up / compute host up) returned by method "def _get_alarms(self):" of this Driver object to cancel/clear the compute host down alarms raised by event. This unfortunatelly does not happen. Oddely enough there is a mimic of this scenario that works but is not robust enough for out needs. Let's call Scenario II: * Gettting an event with compute host down(when one of our compute actually goes down) * Polling alarm (also compute host down) is raised and somehow overwrites the event based one (I can see the updated time). * After a while the actual compute reboots and polling for the alarms returns an alarm with status up that in this case clears the previous (I assume polling type now) alarm. Now I can't understand why this second scenario works and the first one does not. It seems as the same alarm type (compute host down with status down) obtained by polling can overwrite an identical type and status alarm raised by event, but An alarm with an updated status (i. e. up) got by polling mode cannot overwrite / clear and alarm with status down got by an event. I am wondering if there is a reason of this behavior and if there is a way to modify it or is it a bug. For the event's generation I use modified version of zabbix_vitrage.py script that publishes to rabbitmq vitrage_notifications.info queue. I have attached this python script. The code is still experimental But I wanted to know if it's logically posible to create The scenario we need, Scenario I. Best Regards Paul From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com] Sent: Wednesday, February 7, 2018 7:16 PM To: OpenStack Development Mailing List (not for usage questions) > Cc: Ciprian Barbu > Subject: Re: [openstack-dev] [vitrage] Vitrage alarm processing behavior Hi Paul, I’m glad that my fix helped. Regarding the Doctor datasource: the purpose of this datasource was to be used by the Doctor test scripts. Do you intend to modify it, or to create a new similar datasource that also supports polling? Modifying the existing datasource could be problematic, since we need to make sure the existing functionality and tests stay the same. In general, most of our datasources support both polling and notifications. A simple example is the Cinder datasource [1]. For example of an alarm datasource, you can look at Zabbix datasource [2]. You can also go over the documentation of how to add a new datasource [3]. As for your question, it is the responsibility of the datasource to clear the alarms that it created. For the Doctor datasource, you can send an event with “status”:”up” in the details and the datasource will clear the alarm. [1] https://github.com/openstack/vitrage/tree/master/vitrage/datasources/cinder/volume [2] https://github.com/openstack/vitrage/tree/master/vitrage/datasources/zabbix [3] https://docs.openstack.org/vitrage/latest/contributor/add-new-datasource.html Best Regards, Ifat. 
From: Paul Vaduva > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Wednesday, 7 February 2018 at 15:50 To: "OpenStack Development Mailing List (not for usage questions)" > Cc: Ciprian Barbu > Subject: Re: [openstack-dev] [vitrage] Vitrage alarm processing behavior Hi Ifat, Yes I’ve checked the 1.3.1 refers to a deb package (python-vitrage) version built by us, so the git tag used to build that deb is 1.3.0. But I also backported doctor datasource from vitreage git master branch. I also noticed that when I configure snapshots_interval=10 I also get this exception in /var/log/vitrage/graph.log around the time the alarms disapear. https://hastebin.com/ukisajojef.sql I've cherry picked your before mentioned change and the alarm that came from event is now persistent and the exception is gone. So it was a bug. I understand that for doctor datasources I need to have events for raising the alarm and also for clearing it is that correct? Best Regards, Paul From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com] Sent: Wednesday, February 7, 2018 1:24 PM To: OpenStack Development Mailing List (not for usage questions) > Subject: Re: [openstack-dev] [vitrage] Vitrage alarm processing behavior Hi Paul, It sounds like a bug. Alarms created by a datasource are not supposed to be deleted later on. It might be a bug that was fixed in Queens [1]. I’m not sure which Vitrage version you are actually using. I failed to find a vitrage version 1.3.1. Could it be that you are referring to a version of python-vitrageclient or vitrage-dashboard? In any case, if you are using an older version, I suggest that you try to use the fix that I mentioned [1] and see if it helps. [1] https://review.openstack.org/#/c/524228 Best Regards, Ifat. From: Paul Vaduva > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Wednesday, 7 February 2018 at 11:58 To: "openstack-dev at lists.openstack.org" > Subject: [openstack-dev] [vitrage] Vitrage alarm processing behavior Hi Vitrage developers, I have a question about vitrage innerworkings, I ported doctor datasource from master branch to an earlier version of vitrage (1.3.1). I noticed some behavior I am wondering if it's ok or it is bug of some sort. Here it is: 1. I am sending some event for rasing an alarm to doctor datasource of vitrage. 2. I am receiving the event hence the alarm is displayed on vitrage dashboard attached to the affected resource (as expected) 3. If I have configured snapshot_interval=10 in /etc/vitrage/vitrage.conf The alarm disapears after a while fragment from /etc/vitrage/vitrage.conf *************** [datasources] types = nova.host,nova.instance,nova.zone,cinder.volume,neutron.network,neutron.port,doctor snapshots_interval=10 *************** On the other hand if I comment it out the alarm persists ************** [datasources] types = nova.host,nova.instance,nova.zone,cinder.volume,neutron.network,neutron.port,doctor #snapshots_interval=10 ************** I am interested if this behavior is correct or is this a bug. My intention is to create some sort of hybrid datasource starting from the doctor one, that receives events for raising alarms like compute.host.down but uses polling to clear them. Best Regards, Paul Vaduva -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From johnsomor at gmail.com Wed Feb 21 16:34:47 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Wed, 21 Feb 2018 08:34:47 -0800 Subject: [openstack-dev] [QA][all] Migration of Tempest / Grenade jobs to Zuul v3 native In-Reply-To: References: Message-ID: FYI, Octavia has started to use the new devstack-tempest parent here: https://review.openstack.org/#/c/543034/17/zuul.d/jobs.yaml There is a lot of work still left to do on our tempest-plugin but we are making progress. Thanks for the communication out! Michael On Tue, Feb 20, 2018 at 1:22 PM, Andrea Frittoli wrote: > Dear all, > > updates: > > - host/group vars: zuul now supports declaring host and group vars in the > job definition [0][1] - thanks corvus and infra team! > This is a great help towards writing the devstack and tempest base > multinode jobs [2][3] > * NOTE: zuul merges dict variables through job inheritance. Variables in > host/group_vars override global ones. I will write some examples further > clarify this. > > - stable/pike: devstack ansible changes have been backported to stable/pike, > so we can now run zuulv3 jobs against stable/pike too - thank you tosky! > next change in progress related to pike is to provide tempest-full-pike > for branchless repositories [4] > > - documentation: devstack now publishes documentation on its ansible roles > [5]. > More devstack documentation patches are in progress to provide jobs > reference, examples and a job migration how-to [6]. > > > Andrea Frittoli (andreaf) > > [0] > https://docs.openstack.org/infra/zuul/user/config.html#attr-job.host_vars > [1] > https://docs.openstack.org/infra/zuul/user/config.html#attr-job.group_vars > [2] https://review.openstack.org/#/c/545696/ > [3] https://review.openstack.org/#/c/545724/ > [4] https://review.openstack.org/#/c/546196/ > [5] https://docs.openstack.org/devstack/latest/roles.html > [6] https://review.openstack.org/#/c/545992/ > > > On Mon, Feb 19, 2018 at 2:46 PM Andrea Frittoli > wrote: >> >> Dear all, >> >> updates: >> - tempest-full-queens and tempest-full-py3-queens are now available for >> testing of branchless repositories [0]. They are used for tempest and >> devstack-gate. If you own a tempest plugin in a branchless repo, you may >> consider adding similar jobs to your plugin if you use it for tests on >> stable/queen as well. >> - if you have migrated jobs based on devstack-tempest please let me know, >> I'm building reference docs and I'd like to include as many examples as >> possible >> - work on multi-node is in progress, but not ready still - you can follow >> the patches in the multinode branch [1] >> - updates on some of the points from my previous email are inline below >> >> Andrea Frittoli (andreaf) >> >> [0] http://git.openstack.org/cgit/openstack/tempest/tree/.zuul.yaml#n73 >> [1] >> https://review.openstack.org/#/q/status:open++branch:master+topic:multinode >> >> >> On Thu, Feb 15, 2018 at 11:31 PM Andrea Frittoli >> wrote: >>> >>> Dear all, >>> >>> this is the first or a series of ~regular updates on the migration of >>> Tempest / Grenade jobs to Zuul v3 native. >>> >>> The QA team together with the infra team are working on providing the >>> OpenStack community with a set of base Tempest / Grenade jobs that can be >>> used as a basis to write new CI jobs / migrate existing legacy ones with a >>> minimal effort and very little or no Ansible knowledge as a precondition. 
>>> >>> The effort is tracked in an etherpad [0]; I'm trying to keep the etherpad >>> up to date but it may not always be a source of truth. >>> >>> Useful jobs available so far: >>> - devstack-tempest [0] is a simple tempest/devstack job that runs >>> keystone glance nova cinder neutron swift and tempest *smoke* filter >>> - tempest-full [1] is similar but runs a full test run - it replaces the >>> legacy tempest-dsvm-neutron-full from the integrated gate >>> - tempest-full-py3 [2] runs a full test run on python3 - it replaces the >>> legacy tempest-dsvm-py35 >> >> >> Some more details on this topic: what I did not mention in my previous >> email is that the autogenerated Tempest / Grenade CI jobs (legacy-* >> playbooks) are not meant to be used as a basis for Zuul V3 native jobs. To >> create Zuul V3 Tempest / Grenade native jobs for your projects you need to >> through away the legacy playbooks and defined new jobs in .zuul.yaml, as >> documented in the zuul v3 docs [2]. >> The parent job for a single node Tempest job will usually be >> devstack-tempest. Example migrated jobs are avilable, for instance: [3] [4]. >> >> [2] >> https://docs.openstack.org/infra/manual/zuulv3.html#howto-update-legacy-jobs >> [3] >> http://git.openstack.org/cgit/openstack/sahara-tests/tree/.zuul.yaml#n21 >> [4] https://review.openstack.org/#/c/543048/5 >> >>> >>> >>> Both tempest-full and tempest-full-py3 are part of integrated-gate >>> templates, starting from stable/queens on. >>> The other stable branches still run the legacy jobs, since devstack >>> ansible changes have not been backported (yet). If we do backport it will be >>> up to pike maximum. >>> >>> Those jobs work in single node mode only at the moment. Enabling >>> multinode via job configuration only require a new Zuul feature [4][5] that >>> should be available soon; the new feature allows defining host/group >>> variables in the job definition, which means setting variables which are >>> specific to one host or a group of hosts. >>> Multinode DVR and Ironic jobs will require migration of the ovs-* roles >>> form devstack-gate to devstack as well. >>> >>> Grenade jobs (single and multinode) are still legacy, even if the >>> *legacy* word has been removed from the name. >>> They are currently temporarily hosted in the neutron repository. They are >>> going to be implemented as Zuul v3 native in the grenade repository. >>> >>> Roles are documented, and a couple of migration tips for DEVSTACK_GATE >>> flags is available in the etherpad [0]; more comprehensive examples / docs >>> will be available as soon as possible. >>> >>> Please let me know if you find this update useful and / or if you would >>> like to see different information in it. >>> I will send further updates as soon as significant changes / new features >>> become available. 
>>> >>> Andrea Frittoli (andreaf) >>> >>> [0] https://etherpad.openstack.org/p/zuulv3-native-devstack-tempest-jobs >>> [1] http://git.openstack.org/cgit/openstack/tempest/tree/.zuul.yaml#n1 >>> [2] http://git.openstack.org/cgit/openstack/tempest/tree/.zuul.yaml#n29 >>> [3] http://git.openstack.org/cgit/openstack/tempest/tree/.zuul.yaml#n47 >>> [4] https://etherpad.openstack.org/p/zuulv3-group-variables >>> [5] https://review.openstack.org/#/c/544562/ > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From pkovar at redhat.com Wed Feb 21 16:44:14 2018 From: pkovar at redhat.com (Petr Kovar) Date: Wed, 21 Feb 2018 17:44:14 +0100 Subject: [openstack-dev] [docs] Documentation meeting minutes for 2018-02-21 In-Reply-To: <20180221131425.ef2a6bb0b7a585ba95d56306@redhat.com> References: <20180221131425.ef2a6bb0b7a585ba95d56306@redhat.com> Message-ID: <20180221174414.f3f7067de8dd49a404df48c2@redhat.com> ======================= #openstack-doc: docteam ======================= Meeting started by pkovar at 16:01:11 UTC. The full logs are available at http://eavesdrop.openstack.org/meetings/docteam/2018/docteam.2018-02-21-16.01.log.html . Meeting summary --------------- * Rocky PTG (pkovar, 16:05:12) * Planning etherpad for docs+i18n available (pkovar, 16:05:17) * LINK: https://etherpad.openstack.org/p/docs-i18n-ptg-rocky (pkovar, 16:05:23) * Sign up and tell us your ideas on what to discuss in the docs room (pkovar, 16:05:28) * pkovar planning on organizing project docs helproom at the ptg, will send a separate email about it (pkovar, 16:06:23) * similarly to last year, the help room days to correlate with project days to allow project teams to work with docs people (pkovar, 16:07:08) * Vancouver Summit (pkovar, 16:09:04) * Looking to have a shared 10+10 mins project update slot with i18n (pkovar, 16:09:09) * Looking for interested (co-)speakers (pkovar, 16:09:15) * Bug Triage Team (pkovar, 16:11:28) * you can help the docs team with bug triaging by signing up (pkovar, 16:12:30) * LINK: https://wiki.openstack.org/wiki/Documentation/SpecialityTeams (pkovar, 16:12:38) * Open discussion (pkovar, 16:13:18) * see you at the ptg next week! (pkovar, 16:13:32) Meeting ended at 16:20:26 UTC. People present (lines said) --------------------------- * pkovar (20) * openstack (3) * openstackgerrit (1) Generated by `MeetBot`_ 0.1.4 From kendall at openstack.org Wed Feb 21 16:50:00 2018 From: kendall at openstack.org (Kendall Waters) Date: Wed, 21 Feb 2018 10:50:00 -0600 Subject: [openstack-dev] Rocky Project Teams Gathering Details - Dublin February 26 - March 2 Message-ID: <7817DFE7-216A-44BB-8DD3-B8EC4D6B69B0@openstack.org> Thank You to our Sponsors! Huge thanks to all of our sponsors for their support and contributions making the event possible: Fujitsu, Huawei, and Red Hat! Quick Links: http://ptg.openstack.org Venue: Croke Park (Jones' Rd, Drumcondra, Dublin 3, Ireland) IRC Channel: #openstack-ptg WIFI SSID: OpenStack PTG Password: RockyPTG2018 Evening Event Schedule Unofficial PTG Kick Off Happy Hour & Registration Croke Park Hotel- Sideline Bar Sunday 4:00-7:00pm Pick up your badge early and receive a 10% discount on all food items with your PTG badge! 
Board of Directors Meet & Greet Happy Hour Croke Park Hotel- Sideline Bar (Terrace Room) Monday 5:00-7:00pm Come hang out with the OpenStack Board of Directors! First round of drinks and snacks provided. Official PTG Networking Reception Croke Park- GAA Museum Tuesday 5:00-7:00pm Join us at the Gaelic Athletic Association Museum located on the 1st Floor of Croke Park Stadium via the Cusack Stand for drinks, snacks and and free access to the Museum . Stacker Sing-A-Long Croke Park Hotel- Sideline Bar (Terrace Room) Tuesday 8:00-11:30pm Join us after dinner for a night of singing and dancing featuring our own Jonathan Bryce on the piano! Women of OpenStack Meet-Up Wallace’s Asti- 15 Russell Street (two blocks south of Croke Park) Wednesday 5:00-7:00pm Feedback Session Thursday 5:00pm Croke Park Stadium - Hogan Suite, Level 5 Come to give your feedback on the PTG experience, or to ask any general questions to the Foundation. Stacker Family Game Night Croke Park Hotel- Sideline Bar (Terrace Room) Thursday 8:30-11:30pm Grab your favorite game and join your fellow stackers at the Croke Park Hotel. Check out the current list of games being provided here . Registration Hours Registration is located near the elevators on the 5th floor and will be open the following hours: Sunday: 4:00 - 7:00pm (at Croke Park Hotel) Monday - Thursday: 8:00am - 5:00pm Friday: 8:00am - 1:00pm Opening Hours Officially, the PTG (and coffee service) runs everyday from 9am to 5pm. However, you’ll have access to rooms from 8:30am - 6pm. Please see attached schedule and map, which will be printed and distributed to all attendees at Registration. Lunch Lunch will be served daily from 12:30-1:30pm in the Hogan Suite on Level 5. Dynamic Scheduling We have scheduled tracks preassigned to rooms and days, but also a number of rooms and timeslots available for unscheduled tracks and discussion topics to take advantage of throughout the week. The following teams will need to schedule themselves via the ptgbot: Dragonflow, OpenStackClient, Puppet OpenStack, Rally, Release Management, Requirements, Shade / OpenStack SDK, Stable branch maintenance, and Winstackers. If you don't see your project on the attached schedule, you can dynamically schedule it in reservable space using the ptgbot. Ptgbot schedule, instructions and other resources are available here . City Guide Need suggestions for a nearby coffee shop, Dublin must-do’s, or a group-friendly dinner spot? Check out the city guide in the wiki, and add your own recommendations to it as you explore Dublin throughout the week! Join the Conversation We recommend joining the event IRC channel at #openstack-ptg, as we'll be using it for coordination. You can also find on the wiki a list of the Etherpads used by the teams during the week. Vancouver Summit Registration Code At the end of the week, we will be distributing Vancouver Summit registration codes to all checked-in Dublin PTG attendees. If you have any questions regarding Summit registration, please email summitreg at openstack.org . Dongles & Adapters If you plan to project and need anything besides HDMI, please plan to bring your own adapter/dongle. All power outlets will be type G so pack an adaptor if your power cords are anything other than type G. Questions or issues? Email ptg at openstack.org or visit the Registration desk onsite (Level 5) during the hours listed above. See you in Dublin! 
Kendall Kendall Waters OpenStack Marketing kendall at openstack.org -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: PTG_Dulin_Schedule.pdf Type: application/pdf Size: 1790636 bytes Desc: not available URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Dublin_map.pdf Type: application/pdf Size: 1717018 bytes Desc: not available URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: From corey.bryant at canonical.com Wed Feb 21 16:54:28 2018 From: corey.bryant at canonical.com (Corey Bryant) Date: Wed, 21 Feb 2018 11:54:28 -0500 Subject: [openstack-dev] [heat] heat-dashboard is non-free, with broken unit test env In-Reply-To: <4129015c-b120-786f-60e5-2d6a634f3999@debian.org> References: <4129015c-b120-786f-60e5-2d6a634f3999@debian.org> Message-ID: On Wed, Feb 21, 2018 at 9:35 AM, Thomas Goirand wrote: > Hi there! > > I'm having big trouble package heat-dashboard for Debian. I hope I can > get help through this list. > > In here: > > heat_dashboard/static/dashboard/project/heat_dashboard/template_generator/ > js/ > > we have minified *only* versions of Javascript. > > There's also a bug open for this: https://bugs.launchpad.net/heat-dashboard/+bug/1747687 Regards, Corey > 1/ Why is there only minified versions? That's non-free to me, Debian, > and probably any other distro caring about OpenStack. > 2/ Why do we even have a folder called "vendors"? Doesn't this sound > really a bad practice? > 3/ Why is there so many angular-*.min.js files? Do we need them all? > 4/ Why isn't the package using xstatic-angular and friends? > > As it stands, I can't upload heat-dashboard to Debian for Queens, and > it's been removed from Horizon... :( > > Oh, and I almost forgot! When running unit tests, I get: > > PYTHON=python$i NOSE_WITH_OPENSTACK=1 \ > NOSE_OPENSTACK_COLOR=1 \ > NOSE_OPENSTACK_RED=0.05 \ > NOSE_OPENSTACK_YELLOW=0.025 \ > NOSE_OPENSTACK_SHOW_ELAPSED=1 \ > DJANGO_SETTINGS_MODULE=heat_dashboard.test.settings \ > python$i > /home/zigo/sources/openstack/queens/services/heat- > dashboard/build-area/heat-dashboard-1.0.2/manage.py > test heat_dashboard.test --settings=heat_dashboard.test.settings > > No local_settings file found. > Traceback (most recent call last): > File > "/home/zigo/sources/openstack/queens/services/heat- > dashboard/build-area/heat-dashboard-1.0.2/manage.py", > line 23, in > execute_from_command_line(sys.argv) > > [ ... some stack dump ...] > > File "/usr/lib/python3/dist-packages/fasteners/process_lock.py", line > 147, in acquire > self._do_open() > File "/usr/lib/python3/dist-packages/fasteners/process_lock.py", line > 119, in _do_open > self.lockfile = open(self.path, 'a') > PermissionError: [Errno 13] Permission denied: > '/usr/lib/python3/dist-packages/openstack_dashboard/ > local/_usr_lib_python3_dist-packages_openstack_dashboard_ > local_.secret_key_store.lock' > > What thing is attempting to write in my read only /usr, while Horizon is > correctly installed, and writing its secret key material as it should, > in /var/lib/openstack-dashboard? It's probably due to me, but here, how > can I make heat-dashboard unit test behave during package build? What's > this "No local_settings file found" thing? Other dashboards didn't > complain in this way... 
> > Cheers, > > Thomas Goirand (zigo) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From miguel at mlavalle.com Wed Feb 21 16:57:44 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Wed, 21 Feb 2018 10:57:44 -0600 Subject: [openstack-dev] [Neutron][L3-subteam] Weekly IRC meeting cancelled on February 22nd and March 1st Message-ID: Dear L3 sub-team members, Due to the PTG next week and the involved traveling to get to Dublin, we will cancel our weekly meetings on February 22nd and March 1st. We will resume our meetings normally on March 8th at 1500UTC. Safe travels Miguel -------------- next part -------------- An HTML attachment was scrubbed... URL: From miguel at mlavalle.com Wed Feb 21 17:01:08 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Wed, 21 Feb 2018 11:01:08 -0600 Subject: [openstack-dev] [Neutron][Drivers] Weekly IRC meeting cancelled on February 22nd and March 2nd Message-ID: Dear Neutron Drivers, Due to the PTG next week and the involved traveling to get to Dublin, we will cancel our weekly meetings on February 22nd and March 2nd. We will resume our meetings normally on March 8th at 2200UTC. Safe travels Miguel -------------- next part -------------- An HTML attachment was scrubbed... URL: From miguel at mlavalle.com Wed Feb 21 17:07:48 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Wed, 21 Feb 2018 11:07:48 -0600 Subject: [openstack-dev] [Neutron] Weekly IRC meeting canceled on February 27th Message-ID: Dear Neutrinos, Due to the PTG in Dublin, we will cancel our weekly meeting on Tuesday February 27th. We will resume them normally on March 5th 2100 UTC Safe travels Miguel -------------- next part -------------- An HTML attachment was scrubbed... URL: From Paul.Vaduva at enea.com Wed Feb 21 17:11:30 2018 From: Paul.Vaduva at enea.com (Paul Vaduva) Date: Wed, 21 Feb 2018 17:11:30 +0000 Subject: [openstack-dev] [vitrage] Vitrage alarm processing behavior In-Reply-To: References: <2E8BC35D-3FC3-40C1-85F2-09E4C3D4BB2E@nokia.com> Message-ID: Hi Ifat, Link to cuted log version https://hastebin.com/upokifinuq.py Plus full graph.log attached plus code for driver.py with Logging modifications Thanks, Paul From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com] Sent: Wednesday, February 21, 2018 6:18 PM To: OpenStack Development Mailing List (not for usage questions) Cc: Ciprian Barbu Subject: Re: [openstack-dev] [vitrage] Vitrage alarm processing behavior Hi Paul, I suggest that you do the following: · Add a LOG message at the end of _get_alarms to print all alarms that are returned by this function · Restart vitrage-graph and send me its log. I’d like to see if there is any difference between the alarm that is raised and the alarm that is deleted. Thanks, Ifat. From: Paul Vaduva > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Wednesday, 21 February 2018 at 16:30 To: "OpenStack Development Mailing List (not for usage questions)" > Cc: Ciprian Barbu > Subject: Re: [openstack-dev] [vitrage] Vitrage alarm processing behavior I attached also the driver.py that I am using. 
From: Paul Vaduva [mailto:Paul.Vaduva at enea.com] Sent: Wednesday, February 21, 2018 3:22 PM To: OpenStack Development Mailing List (not for usage questions) > Cc: Ciprian Barbu > Subject: [Attachment removed] Re: [openstack-dev] [vitrage] Vitrage alarm processing behavior Hi Ifat, Sorry for the late reply. To answer your questions I started as an example from the doctor datasource (or a porting of it for the 1.3.0 version of vitrage) but will call it something different so no need to worry about conflicting with present doctor datasource. I added polling alarms to it but I have a more particular use case: * I get compute host down alarm on event * I can't get host up event or it's an intricate sollution to implement I tried to see if I can make the following scenario work: Let's call Scenario I * Get a compute host down event (Raisng an alarm) * Periodically poll for the status of the compute in method "def _get_alarms(self):" of the Driver object Both type of Interactions seem to work (polling and event based). However now comes the tricky part. I would need for the alarms (with status up / compute host up) returned by method "def _get_alarms(self):" of this Driver object to cancel/clear the compute host down alarms raised by event. This unfortunatelly does not happen. Oddely enough there is a mimic of this scenario that works but is not robust enough for out needs. Let's call Scenario II: * Gettting an event with compute host down(when one of our compute actually goes down) * Polling alarm (also compute host down) is raised and somehow overwrites the event based one (I can see the updated time). * After a while the actual compute reboots and polling for the alarms returns an alarm with status up that in this case clears the previous (I assume polling type now) alarm. Now I can't understand why this second scenario works and the first one does not. It seems as the same alarm type (compute host down with status down) obtained by polling can overwrite an identical type and status alarm raised by event, but An alarm with an updated status (i. e. up) got by polling mode cannot overwrite / clear and alarm with status down got by an event. I am wondering if there is a reason of this behavior and if there is a way to modify it or is it a bug. For the event's generation I use modified version of zabbix_vitrage.py script that publishes to rabbitmq vitrage_notifications.info queue. I have attached this python script. The code is still experimental But I wanted to know if it's logically posible to create The scenario we need, Scenario I. Best Regards Paul From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com] Sent: Wednesday, February 7, 2018 7:16 PM To: OpenStack Development Mailing List (not for usage questions) > Cc: Ciprian Barbu > Subject: Re: [openstack-dev] [vitrage] Vitrage alarm processing behavior Hi Paul, I’m glad that my fix helped. Regarding the Doctor datasource: the purpose of this datasource was to be used by the Doctor test scripts. Do you intend to modify it, or to create a new similar datasource that also supports polling? Modifying the existing datasource could be problematic, since we need to make sure the existing functionality and tests stay the same. In general, most of our datasources support both polling and notifications. A simple example is the Cinder datasource [1]. For example of an alarm datasource, you can look at Zabbix datasource [2]. You can also go over the documentation of how to add a new datasource [3]. 
As for your question, it is the responsibility of the datasource to clear the alarms that it created. For the Doctor datasource, you can send an event with “status”:”up” in the details and the datasource will clear the alarm. [1] https://github.com/openstack/vitrage/tree/master/vitrage/datasources/cinder/volume [2] https://github.com/openstack/vitrage/tree/master/vitrage/datasources/zabbix [3] https://docs.openstack.org/vitrage/latest/contributor/add-new-datasource.html Best Regards, Ifat. From: Paul Vaduva > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Wednesday, 7 February 2018 at 15:50 To: "OpenStack Development Mailing List (not for usage questions)" > Cc: Ciprian Barbu > Subject: Re: [openstack-dev] [vitrage] Vitrage alarm processing behavior Hi Ifat, Yes I’ve checked the 1.3.1 refers to a deb package (python-vitrage) version built by us, so the git tag used to build that deb is 1.3.0. But I also backported doctor datasource from vitreage git master branch. I also noticed that when I configure snapshots_interval=10 I also get this exception in /var/log/vitrage/graph.log around the time the alarms disapear. https://hastebin.com/ukisajojef.sql I've cherry picked your before mentioned change and the alarm that came from event is now persistent and the exception is gone. So it was a bug. I understand that for doctor datasources I need to have events for raising the alarm and also for clearing it is that correct? Best Regards, Paul From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com] Sent: Wednesday, February 7, 2018 1:24 PM To: OpenStack Development Mailing List (not for usage questions) > Subject: Re: [openstack-dev] [vitrage] Vitrage alarm processing behavior Hi Paul, It sounds like a bug. Alarms created by a datasource are not supposed to be deleted later on. It might be a bug that was fixed in Queens [1]. I’m not sure which Vitrage version you are actually using. I failed to find a vitrage version 1.3.1. Could it be that you are referring to a version of python-vitrageclient or vitrage-dashboard? In any case, if you are using an older version, I suggest that you try to use the fix that I mentioned [1] and see if it helps. [1] https://review.openstack.org/#/c/524228 Best Regards, Ifat. From: Paul Vaduva > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Wednesday, 7 February 2018 at 11:58 To: "openstack-dev at lists.openstack.org" > Subject: [openstack-dev] [vitrage] Vitrage alarm processing behavior Hi Vitrage developers, I have a question about vitrage innerworkings, I ported doctor datasource from master branch to an earlier version of vitrage (1.3.1). I noticed some behavior I am wondering if it's ok or it is bug of some sort. Here it is: 1. I am sending some event for rasing an alarm to doctor datasource of vitrage. 2. I am receiving the event hence the alarm is displayed on vitrage dashboard attached to the affected resource (as expected) 3. 
If I have configured snapshot_interval=10 in /etc/vitrage/vitrage.conf The alarm disapears after a while fragment from /etc/vitrage/vitrage.conf *************** [datasources] types = nova.host,nova.instance,nova.zone,cinder.volume,neutron.network,neutron.port,doctor snapshots_interval=10 *************** On the other hand if I comment it out the alarm persists ************** [datasources] types = nova.host,nova.instance,nova.zone,cinder.volume,neutron.network,neutron.port,doctor #snapshots_interval=10 ************** I am interested if this behavior is correct or is this a bug. My intention is to create some sort of hybrid datasource starting from the doctor one, that receives events for raising alarms like compute.host.down but uses polling to clear them. Best Regards, Paul Vaduva -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graph.log Type: application/octet-stream Size: 1494252 bytes Desc: graph.log URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: driver.py Type: application/octet-stream Size: 5673 bytes Desc: driver.py URL: From amotoki at gmail.com Wed Feb 21 17:38:37 2018 From: amotoki at gmail.com (Akihiro Motoki) Date: Thu, 22 Feb 2018 02:38:37 +0900 Subject: [openstack-dev] [heat] heat-dashboard is non-free, with broken unit test env In-Reply-To: <4129015c-b120-786f-60e5-2d6a634f3999@debian.org> References: <4129015c-b120-786f-60e5-2d6a634f3999@debian.org> Message-ID: 2018-02-21 23:35 GMT+09:00 Thomas Goirand : > Hi there! > > I'm having big trouble package heat-dashboard for Debian. I hope I can > get help through this list. > > In here: > > heat_dashboard/static/dashboard/project/heat_dashboard/template_generator/js/ > > we have minified *only* versions of Javascript. > > 1/ Why is there only minified versions? That's non-free to me, Debian, > and probably any other distro caring about OpenStack. > 2/ Why do we even have a folder called "vendors"? Doesn't this sound > really a bad practice? > 3/ Why is there so many angular-*.min.js files? Do we need them all? > 4/ Why isn't the package using xstatic-angular and friends? IIUC, these javascript files are only used by the template generator which was newly added after split out from horizon. If you can provide only the Pike-compatible feature from, horizon, code related to the template generator can be excluded. I know it is not ideal and am not sure this is acceptable workaround. Horizon docs contains a guideline on javascript and it discourages embedded JS files. https://docs.openstack.org/horizon/latest/contributor/topics/packaging.html heat-dashboard choice does not look the right way. Akihiro > > As it stands, I can't upload heat-dashboard to Debian for Queens, and > it's been removed from Horizon... :( > > Oh, and I almost forgot! When running unit tests, I get: > > PYTHON=python$i NOSE_WITH_OPENSTACK=1 \ > NOSE_OPENSTACK_COLOR=1 \ > NOSE_OPENSTACK_RED=0.05 \ > NOSE_OPENSTACK_YELLOW=0.025 \ > NOSE_OPENSTACK_SHOW_ELAPSED=1 \ > DJANGO_SETTINGS_MODULE=heat_dashboard.test.settings \ > python$i > /home/zigo/sources/openstack/queens/services/heat-dashboard/build-area/heat-dashboard-1.0.2/manage.py > test heat_dashboard.test --settings=heat_dashboard.test.settings > > No local_settings file found. 
> Traceback (most recent call last): > File > "/home/zigo/sources/openstack/queens/services/heat-dashboard/build-area/heat-dashboard-1.0.2/manage.py", > line 23, in > execute_from_command_line(sys.argv) > > [ ... some stack dump ...] > > File "/usr/lib/python3/dist-packages/fasteners/process_lock.py", line > 147, in acquire > self._do_open() > File "/usr/lib/python3/dist-packages/fasteners/process_lock.py", line > 119, in _do_open > self.lockfile = open(self.path, 'a') > PermissionError: [Errno 13] Permission denied: > '/usr/lib/python3/dist-packages/openstack_dashboard/local/_usr_lib_python3_dist-packages_openstack_dashboard_local_.secret_key_store.lock' > > What thing is attempting to write in my read only /usr, while Horizon is > correctly installed, and writing its secret key material as it should, > in /var/lib/openstack-dashboard? It's probably due to me, but here, how > can I make heat-dashboard unit test behave during package build? What's > this "No local_settings file found" thing? Other dashboards didn't > complain in this way... > > Cheers, > > Thomas Goirand (zigo) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From miguel at mlavalle.com Wed Feb 21 17:57:41 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Wed, 21 Feb 2018 11:57:41 -0600 Subject: [openstack-dev] [Neutron] Queens retrospective Message-ID: Hi Neutrinos and broader OpenStack community, I have started an etherpad to collect your thoughts on the retrospective for the Queens cycle: https://etherpad.openstack.org/p/neutron-queens-retrospective. Everyone is welcome to provide feedback so we can improve our contribution to the community in the future. The retrospective session is scheduled for Wednesday 28th from 9 to 10 Looking forward to see you all in Dublin Safe travels Miguel -------------- next part -------------- An HTML attachment was scrubbed... URL: From pkovar at redhat.com Wed Feb 21 18:18:08 2018 From: pkovar at redhat.com (Petr Kovar) Date: Wed, 21 Feb 2018 19:18:08 +0100 Subject: [openstack-dev] [docs] About the convention to use '.' instead of 'source'. In-Reply-To: <1518986610-sup-9087@lrrr.local> References: <20180217210312.mv43be7re73vac2i@yuggoth.org> <373c2c5c-6d39-59f2-96f5-5fe9dbbb6364@inaugust.com> <20180218160151.4m6yzuvd7pdq7c2c@yuggoth.org> <1518986610-sup-9087@lrrr.local> Message-ID: <20180221191808.544ad131f3f7a170733bd87f@redhat.com> On Sun, 18 Feb 2018 15:44:04 -0500 Doug Hellmann wrote: > Excerpts from Jeremy Stanley's message of 2018-02-18 16:01:52 +0000: > > On 2018-02-18 03:55:51 -0600 (-0600), Monty Taylor wrote: > > [...] > > > I'd honestly argue in favor of assuming bash and using 'source' > > > because it's more readable. We don't make allowances for alternate > > > shells in our examples anyway. > > > > > > I personally try to use 'source' vs . and $() vs. `` as > > > aggressively as I can. > > > > > > That said - I completely agree with fungi on the description of > > > the tradeoffs of each direction, and I do think it's valuable to > > > pick one for the docs. > > > > Yes, it's not my call but I too would prefer more readable examples > > over a strict adherence to POSIX. 
As long as we say somewhere that > > our examples assume the user is in a GNU bash(1) environment and > > that the examples may require minor adjustment for other shells, I > > think that's a perfectly reasonable approach. If there's a > > documentation style guide, that too would be a great place to > > encourage examples following certain conventions such as source > > instead of ., $() instead of ``, [] instead of test, an so on... and > > provide a place to explain the rationale so that reviewers have a > > convenient response they can link for bulk "improvements" which seem > > to indicate ignorance of our reasons for these choices. > > I've proposed reverting the style-guide change that seems to have led to > this discussion in https://review.openstack.org/#/c/545718/2 FYI, we've just approved this. Thanks, pk From kennelson11 at gmail.com Wed Feb 21 19:38:50 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Wed, 21 Feb 2018 19:38:50 +0000 Subject: [openstack-dev] [PTG] StoryBoard Discussion Planning Message-ID: Hello Everyone! We have tentatively scheduled StoryBoard discussions Wednesday morning and will officialize it in the PTGbot soon. If there are specific things you wish to discuss please add them to our planning etherpad[1]! If you think you might attend, please also add your name to the list so we know you're interested. See you all next week! -Kendall (diablo_rojo) [1] https://etherpad.openstack.org/p/StoryBoard-Rocky-PTG -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris at openstack.org Wed Feb 21 19:41:47 2018 From: chris at openstack.org (Chris Hoge) Date: Wed, 21 Feb 2018 11:41:47 -0800 Subject: [openstack-dev] [k8s] SIG-K8s Scheduling for Dublin PTG Message-ID: <2C2B8E52-0F61-459D-93B7-541BC3B054C3@openstack.org> SIG-K8s has a planning etherpad available for the Dublin PTG. We have space scheduled for Tuesday, with approximately eight forty-minute work blocks. For the K8s on OpenStack side of things, we've identified a core set of priorities that we'll be working on that day, including: * Moving openstack-cloud-controller-manager into OpenStack git repo. * Enabling and improving testing across multiple platforms. * Identifying documentation gaps. Some of these items have some collaboration points with the Infra and QA teams. If members of those teams could help us identify when they would be available to work on repository creation and enabling testing, that would help us to schedule the appropriate times for those topics. The work of the SIG-K8s groups also covers other Kubernetes and OpenStack integrations, including deploying OpenStack on top of Kubernetes. If anyone from the Kolla, OpenStack-Helm, Loci, Magnum, Kuryr, or Zun teams would like to schedule cross-project work sessions, please add your requests and preferred times to the planning etherpad. Additionally, I can be available to attend work sessions for any of those projects. https://etherpad.openstack.org/p/sig-k8s-2018-dublin-ptg Thanks! 
Chris From amoralej at redhat.com Wed Feb 21 19:53:09 2018 From: amoralej at redhat.com (Alfredo Moralejo Alonso) Date: Wed, 21 Feb 2018 20:53:09 +0100 Subject: [openstack-dev] [yaql] [tripleo] Backward incompatible change in YAQL minor version In-Reply-To: <20180219232420.GB23143@thor.bakeyournoodle.com> References: <20180218003536.GY23143@thor.bakeyournoodle.com> <20180219232420.GB23143@thor.bakeyournoodle.com> Message-ID: On Tue, Feb 20, 2018 at 12:24 AM, Tony Breeds wrote: > On Mon, Feb 19, 2018 at 06:10:56PM +0100, Alfredo Moralejo Alonso wrote: > > > Recently, we have added a job in post pipeline for openstack/requirements > > in https://review.rdoproject.org to > > automatically post updates in RDO dependencies repo when changes are > > detected in upper-constraints. This > > job will try to automatically update the dependencies when possible or > > notify to take required manual actions > > in some cases. > > > > I expect this will improve dependencies management in RDO in next > releases. > > That's cool. Can you point me at how that's done? I'm not sure how > you'd automate the builds but that's probably just lack of imagination > on my part ;P > > My fault for not having the documentation ready yet, I will share when ready. Short version is: 1. We generate a review to rdoinfo (RDO's package database) every time a change is detected in upper-constraints.txt proposing it as candidate in dependencies repo. 2. A job in rdoinfo gate detects if the required version is available in fedora. If so, it tries to rebuild it for CentOS and add it to CentOS dependencies repo. If the version is not available in fedora or can not be rebuilt for CentOS, the review fails in gate and a manual action is required. 3. If the dependency can be rebuilt from fedora, a change is proposed to promote it to the testing phase (the one used in upstream gate jobs for master). A set of jobs deploying OpenStack with packstack, puppet-openstack-integration and tripleo are executed to gate the dependency update. When the review is merged, the new or updated dependency is pushed to the RDO repo. If you are interested i can discuss the implementation details. Best regards, Alfredo > Yours Tony. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Wed Feb 21 20:36:06 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 21 Feb 2018 14:36:06 -0600 Subject: [openstack-dev] [release][ptl] Final Queens RC Deadline In-Reply-To: References: <20180219154429.GA2110@sm-xps> Message-ID: <20180221203606.GA306@sm-xps> One more reminder that tomorrow, *March* 22, is the deadline for any additional RCs. There are a few repos that have merged commits, so if you are one of those and plan on having those changes as part of Queens, please propose a new RC release within ~24 hours. This is also the final release deadline for cycle-with-intermediary projects. Sean On Mon, Feb 19, 2018 at 11:50:21AM -0500, David Moreau Simard wrote: > On Mon, Feb 19, 2018 at 10:44 AM, Sean McGinnis wrote: > > Hey everyone, > > > > Just a quick reminder that Thursday, 22 March, is the deadline for any final > > Queens release candidates. 
After this point we will enter a quiet period for a > > week in preparation of tagging the final Queens release during the PTG week. > > February, right ? > > David Moreau Simard > Senior Software Engineer | OpenStack RDO > > dmsimard = [irc, github, twitter] > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From melwittt at gmail.com Wed Feb 21 20:37:14 2018 From: melwittt at gmail.com (melanie witt) Date: Wed, 21 Feb 2018 12:37:14 -0800 Subject: [openstack-dev] [nova] queens retrospective etherpad Message-ID: <2D90A554-9C74-4E3F-A7C1-53E8E071F137@gmail.com> Greetings Stackers, FYI we also have a retrospective etherpad [1] for Queens which we’ll cover at the PTG on Wednesday at 9:00 - 10:00 AM. Please add your thoughts to the etherpad on what you think went well in Queens and what you think could use some improvement going forward into the Rocky cycle, so we can discuss them and identify actionable things we can do to make things better in Rocky. Thanks, -melanie [1] https://etherpad.openstack.org/p/nova-queens-retrospective From zigo at debian.org Wed Feb 21 20:41:03 2018 From: zigo at debian.org (Thomas Goirand) Date: Wed, 21 Feb 2018 21:41:03 +0100 Subject: [openstack-dev] [heat] heat-dashboard is non-free, with broken unit test env In-Reply-To: References: <4129015c-b120-786f-60e5-2d6a634f3999@debian.org> Message-ID: On 02/21/2018 05:54 PM, Corey Bryant wrote: > > > On Wed, Feb 21, 2018 at 9:35 AM, Thomas Goirand > wrote: > > Hi there! > > I'm having big trouble package heat-dashboard for Debian. I hope I can > get help through this list. > > In here: > > heat_dashboard/static/dashboard/project/heat_dashboard/template_generator/js/ > > we have minified *only* versions of Javascript. > > > There's also a bug open for this: > https://bugs.launchpad.net/heat-dashboard/+bug/1747687 > > Regards, > Corey Thanks for the link and filing this bug. Cheers, Thomas Goirand (zigo) From andrea.frittoli at gmail.com Wed Feb 21 20:43:17 2018 From: andrea.frittoli at gmail.com (Andrea Frittoli) Date: Wed, 21 Feb 2018 20:43:17 +0000 Subject: [openstack-dev] [k8s] SIG-K8s Scheduling for Dublin PTG In-Reply-To: <2C2B8E52-0F61-459D-93B7-541BC3B054C3@openstack.org> References: <2C2B8E52-0F61-459D-93B7-541BC3B054C3@openstack.org> Message-ID: On Wed, Feb 21, 2018 at 7:41 PM Chris Hoge wrote: > SIG-K8s has a planning etherpad available for the Dublin PTG. We have > space scheduled for Tuesday, with approximately eight forty-minute work > blocks. For the K8s on OpenStack side of things, we've identified a core > set of priorities that we'll be working on that day, including: > > * Moving openstack-cloud-controller-manager into OpenStack git repo. > * Enabling and improving testing across multiple platforms. > * Identifying documentation gaps. > > Some of these items have some collaboration points with the Infra and > QA teams. If members of those teams could help us identify when they > would be available to work on repository creation and enabling testing, > that would help us to schedule the appropriate times for those topics. > I'm interested in participating. It's a bit unfortunate that Tuesday overlaps with the QA/infra help room, but as long as the current discussion is published via PTG bot I should be able to switch rooms when needed. 
Andrea Frittoli (andreaf) > > The work of the SIG-K8s groups also covers other Kubernetes and OpenStack > integrations, including deploying OpenStack on top of Kubernetes. If > anyone from the Kolla, OpenStack-Helm, Loci, Magnum, Kuryr, or Zun > teams would like to schedule cross-project work sessions, please add your > requests and preferred times to the planning etherpad. Additionally, I > can be available to attend work sessions for any of those projects. > > https://etherpad.openstack.org/p/sig-k8s-2018-dublin-ptg > > Thanks! > Chris > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Wed Feb 21 20:50:54 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 21 Feb 2018 14:50:54 -0600 Subject: [openstack-dev] [nova] Contrail VIF TAP plugging broken In-Reply-To: References: Message-ID: <3a33a37e-d329-cde3-9551-2afff09842ea@gmail.com> On 2/21/2018 4:30 AM, Édouard Thuleau wrote: > Hi Seán, Michael, > > Since patch [1] moved Contrail VIF plugging under privsep, Nova fails to > plug TAP on the Contrail software switch (named vrouter) [2]. I proposed > a fix in the beginning of the year [3] but it still pending approval > even it got a couple of +1 and no negative feedback. It's why I'm > writing that email to get your attention. > That issue appeared during the Queens development cycle and we need to > fix that before it was released (hope we are not to late). > Contrail already started to move on os-vif driver [4]. A first VIF type > driver is there for DPDK case [5], we plan to do the same for the TAP > case in the R release and remove the Nova VIF plugging code for the vrouter. > > [1] https://review.openstack.org/#/c/515916/ > [2] https://bugs.launchpad.net/nova/+bug/1742963 > [3] https://review.openstack.org/#/c/533212/ > [4] https://github.com/Juniper/contrail-nova-vif-driver > [5] https://review.openstack.org/#/c/441183/ > > Regards, > Édouard. > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Approved the change on master and working on the backport to stable/queens. We'll be cutting an RC3 tomorrow so I'll make sure this gets into that. -- Thanks, Matt From Louie.Kwan at windriver.com Wed Feb 21 21:19:42 2018 From: Louie.Kwan at windriver.com (Kwan, Louie) Date: Wed, 21 Feb 2018 21:19:42 +0000 Subject: [openstack-dev] [libvrit] Can QEMU or LIBVIRT know VM is powering-off Message-ID: <47EFB32CD8770A4D9590812EE28C977E962536C6@ALA-MBD.corp.ad.wrs.com> When turning off a VM by doing nova stop, the Status and Task State is there for Nova. But can Libvirt / qemu programmatically figure out the 'Task State' that the VM is trying to powering-off ?. For libvirt, it seems only know the "Power State"? Anyway to read the "powering-off" info? 
+--------------------------------------+------+--------+--------------+-------------+--------------------------------+
| ID                                   | Name | Status | Task State   | Power State | Networks                       |
+--------------------------------------+------+--------+--------------+-------------+--------------------------------+
| 09d65498-b1fe-4a99-9f43-4c365a79ff36 | c1   | ACTIVE | -            | Running     | public=172.24.4.6, 2001:db8::3 |
| 565da9ba-3c0c-4087-83ca-32a5a1b00a55 | iim1 | ACTIVE | powering-off | Running     | public=172.24.4.5, 2001:db8::7 |
+--------------------------------------+------+--------+--------------+-------------+--------------------------------+
+--------------------------------------+------+---------+------------+-------------+--------------------------------+
| ID                                   | Name | Status  | Task State | Power State | Networks                       |
+--------------------------------------+------+---------+------------+-------------+--------------------------------+
| 09d65498-b1fe-4a99-9f43-4c365a79ff36 | c1   | ACTIVE  | -          | Running     | public=172.24.4.6, 2001:db8::3 |
| 565da9ba-3c0c-4087-83ca-32a5a1b00a55 | iim1 | SHUTOFF | -          | Shutdown    | public=172.24.4.5, 2001:db8::7 |
+--------------------------------------+------+---------+------------+-------------+--------------------------------+
Thanks. Louie -------------- next part -------------- An HTML attachment was scrubbed... URL: From no-reply at openstack.org Wed Feb 21 21:28:18 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Wed, 21 Feb 2018 21:28:18 -0000 Subject: [openstack-dev] [cinder] cinder 12.0.0.0rc2 (queens) Message-ID: Hello everyone, A new release candidate for cinder for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/cinder/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/cinder/log/?h=stable/queens Release notes for cinder can be found at: http://docs.openstack.org/releasenotes/cinder/ From sean.mcginnis at gmx.com Wed Feb 21 21:38:48 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 21 Feb 2018 15:38:48 -0600 Subject: [openstack-dev] [release][ptl] Final Queens RC Deadline In-Reply-To: <20180221203606.GA306 at sm-xps> References: <20180221203606.GA306 at sm-xps> Message-ID: February!! I meant February!! Gaaaaaah! On Feb 21 2018, at 2:36 pm, Sean McGinnis wrote: > One more reminder that tomorrow, *March* 22, is the deadline for any additional > RCs. There are a few repos that have merged commits, so if you are one of those > and plan on having those changes as part of Queens, please propose a new RC > release within ~24 hours. > > This is also the final release deadline for cycle-with-intermediary projects. > > Sean > > On Mon, Feb 19, 2018 at 11:50:21AM -0500, David Moreau Simard wrote: > > On Mon, Feb 19, 2018 at 10:44 AM, Sean McGinnis wrote: > > > Hey everyone, > > > > > > Just a quick reminder that Thursday, 22 March, is the deadline for any final > > > Queens release candidates. After this point we will enter a quiet period for a > > > week in preparation of tagging the final Queens release during the PTG week. > > > > February, right ?
> > > > David Moreau Simard > > Senior Software Engineer | OpenStack RDO > > > > dmsimard = [irc, github, twitter] > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony at bakeyournoodle.com Wed Feb 21 23:06:17 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Thu, 22 Feb 2018 10:06:17 +1100 Subject: [openstack-dev] [yaql] [tripleo] Backward incompatible change in YAQL minor version In-Reply-To: References: <20180218003536.GY23143@thor.bakeyournoodle.com> <20180219232420.GB23143@thor.bakeyournoodle.com> Message-ID: <20180221230616.GA11131@thor.bakeyournoodle.com> On Wed, Feb 21, 2018 at 08:53:09PM +0100, Alfredo Moralejo Alonso wrote: > Short version is: > > 1. We generate a review to rdoinfo (RDO's package database) every time a > change is detected in upper-constraints.txt proposing it as candidate in > dependencies repo. > 2. A job in rdoinfo gate detects if the required version is available in > fedora. If so, it tries to rebuild it for CentOS and add it to CentOS > dependencies repo. If the version is not available in fedora or can not be > rebuilt for CentOS, the review fails in gate and a manual action is > required. > 3. If the dependency can be rebuilt from fedora, a change is proposed to > promote it to the testing phase (the one used in upstream gate jobs for > master). A set of jobs deploying OpenStack with packstack, > puppet-openstack-integration and tripleo are executed to gate the > dependency update. When the review is merged, the new or updated dependency > is pushed to the RDO repo. See it was just my lack of imagination. I guess I was getting hung up on step 2 but the fall back to manual action is pretty reasonable there. It'll be interesting to see how often that happens, and also what happens in the scenario where the RDO package has diverged slightly from Fedora. > If you are interested i can discuss the implementation details. Yeah I'd very much like to grab 10-15 mins of your time next week, if you're cool with that. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From chris.friesen at windriver.com Wed Feb 21 23:43:00 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Wed, 21 Feb 2018 17:43:00 -0600 Subject: [openstack-dev] [libvrit] Can QEMU or LIBVIRT know VM is powering-off In-Reply-To: <47EFB32CD8770A4D9590812EE28C977E962536C6@ALA-MBD.corp.ad.wrs.com> References: <47EFB32CD8770A4D9590812EE28C977E962536C6@ALA-MBD.corp.ad.wrs.com> Message-ID: <5A8E0404.9010305@windriver.com> On 02/21/2018 03:19 PM, Kwan, Louie wrote: > When turning off a VM by doing nova stop, the Status and Task State is there > for Nova. But can Libvirt / qemu programmatically figure out the ‘Task State’ > that the VM is trying to powering-off ?. > > For libvirt, it seems only know the “Power State”? Anyway to read the > “powering-off” info? The fact that you have asked nova to power off the instance means nothing to libvirt/qemu. 
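A minimal sketch of what libvirt itself can report, illustrating the split described above (it assumes libvirt-python and a local qemu:///system connection; the domain name is made up). There is no libvirt call that returns nova's "powering-off" task state -- that exists only in the nova database/API, e.g. the OS-EXT-STS:task_state field of a server returned by the compute API.

import libvirt

# Map the libvirt power states that matter here to the labels nova shows.
POWER_STATE_NAMES = {
    libvirt.VIR_DOMAIN_RUNNING: 'Running',
    libvirt.VIR_DOMAIN_PAUSED: 'Paused',
    libvirt.VIR_DOMAIN_SHUTDOWN: 'Being shut down',
    libvirt.VIR_DOMAIN_SHUTOFF: 'Shutdown',
}

conn = libvirt.open('qemu:///system')
try:
    dom = conn.lookupByName('instance-00000002')  # hypothetical domain name
    state, reason = dom.state()                   # power state only
    print(POWER_STATE_NAMES.get(state, 'other (%d)' % state))
finally:
    conn.close()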
In the "nova stop" case nova will do some housekeeping stuff, optionally tell libvirt to shutdown the domain cleanly, then tell libvirt to destroy the domain, then do more housekeeping stuff. Chris From kennelson11 at gmail.com Wed Feb 21 23:48:43 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Wed, 21 Feb 2018 23:48:43 +0000 Subject: [openstack-dev] [PTL][SIG][PTG]Team Photos In-Reply-To: References: Message-ID: Hello Everyone! I just wanted to remind you all that you have till *Monday Feburary 26th* to sign up if your team or group is interested in a team photo on the Croke Park pitch! We still have slots available Tuesday afternoon and Thursday morning. -Kendall (diablo_rojo) On Thu, Feb 8, 2018 at 10:21 AM Kendall Nelson wrote: > This link might work better for everyone: > > https://docs.google.com/spreadsheets/d/1J2MRdVQzSyakz9HgTHfwYPe49PaoTypX66eNURsopQY/edit?usp=sharing > > -Kendall (diablo_rojo) > > > On Wed, Feb 7, 2018 at 9:15 PM Kendall Nelson > wrote: > >> Hello PTLs and SIG Chairs! >> >> So here's the deal, we have 50 spots that are first come, first >> served. We have slots available before and after lunch both Tuesday and >> Thursday. >> >> The google sheet here[1] should be set up so you have access to edit, but >> if you can't for some reason just reply directly to me and I can add your >> team to the list (I need team/sig name and contact email). >> >> I will be locking the google sheet on *Monday February 26th so I need to >> know if your team is interested by then. * >> >> See you soon! >> >> - Kendall Nelson (diablo_rojo) >> >> [1] >> https://docs.google.com/spreadsheets/d/1J2MRdVQzSyakz9HgTHfwYPe49PaoTypX66eNURsopQY/edit?usp=sharing >> >> >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Thu Feb 22 00:17:13 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 22 Feb 2018 00:17:13 +0000 Subject: [openstack-dev] Vancouver Community Contrbutor Awards Message-ID: Hello Everyone :) While its still a few months away from the Vancouver Summit, I'd like to kickoff another round of Community Contributor Awards early to allow more time for submissions. The idea is to give recognition to those that are undervalued, don't know their are appreciated, bind the community together, keep things fun, or challenge some norm. It can be someone that does a dirty job no one wants to do, someone that steps up into an new role, or someone that is new to the community but giving it their all. Basically nominate anyone you think deserves an award :) [1] There are A LOT of people out there that could use a pat on the back and affirmation that they do good work for the community. Please submit your nominations by *May 14th. * Winners will be announced at the feedback session in Vancouver. -Kendall Nelson (diablo_rojo) [1] https://openstackfoundation.formstack.com/forms/cca_nominations_vancouver -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Thu Feb 22 01:24:38 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 21 Feb 2018 20:24:38 -0500 Subject: [openstack-dev] [all][release] "why do I have new tags for old versions?" Message-ID: <1519262442-sup-6100@lrrr.local> The release team is doing some housekeeping work in openstack/releases and in the course of that work we modified the deliverable files for some very old series. Including all the way back to Austin. Because those files changed, the tag-releases job ran. 
And because, apparently, some of the very very old tags had never been imported into git, they were added today based on the SHAs we had established when we first imported the history into the repository. So, if you notice very old tags like "2010.1" showing up in places, that's why. Our shiny and modern build machinery doesn't work with the state of those repos from that long ago, so aside from the tags we don't think anything else was rebuilt (no build artifacts on tarballs.o.o were changed, for example). Doug From zhang.lei.fly at gmail.com Thu Feb 22 01:53:49 2018 From: zhang.lei.fly at gmail.com (Jeffrey Zhang) Date: Thu, 22 Feb 2018 09:53:49 +0800 Subject: [openstack-dev] [PTL][SIG][PTG]Team Photos In-Reply-To: References: Message-ID: Kendall, I added the Kolla Team to 10:20-10:30 on Thursday On Thu, Feb 22, 2018 at 7:48 AM, Kendall Nelson wrote: > Hello Everyone! > > I just wanted to remind you all that you have till *Monday Feburary 26th* > to sign up if your team or group is interested in a team photo on the Croke > Park pitch! We still have slots available Tuesday afternoon and Thursday > morning. > > -Kendall (diablo_rojo) > > On Thu, Feb 8, 2018 at 10:21 AM Kendall Nelson > wrote: > >> This link might work better for everyone: >> https://docs.google.com/spreadsheets/d/1J2MRdVQzSyakz9HgTHfwYPe49PaoT >> ypX66eNURsopQY/edit?usp=sharing >> >> -Kendall (diablo_rojo) >> >> >> On Wed, Feb 7, 2018 at 9:15 PM Kendall Nelson >> wrote: >> >>> Hello PTLs and SIG Chairs! >>> >>> So here's the deal, we have 50 spots that are first come, first >>> served. We have slots available before and after lunch both Tuesday and >>> Thursday. >>> >>> The google sheet here[1] should be set up so you have access to edit, >>> but if you can't for some reason just reply directly to me and I can add >>> your team to the list (I need team/sig name and contact email). >>> >>> I will be locking the google sheet on *Monday February 26th so I need >>> to know if your team is interested by then. * >>> >>> See you soon! >>> >>> - Kendall Nelson (diablo_rojo) >>> >>> [1] https://docs.google.com/spreadsheets/d/ >>> 1J2MRdVQzSyakz9HgTHfwYPe49PaoTypX66eNURsopQY/edit?usp=sharing >>> >>> >>> >>> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Regards, Jeffrey Zhang Blog: http://xcodest.me -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Thu Feb 22 04:14:47 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 22 Feb 2018 13:14:47 +0900 Subject: [openstack-dev] [QA] Meeting Thursday Feb 22nd at 8:00 UTC Message-ID: Hello everyone, Hope everyone is back from vacation. QA team is resuming the regular weekly meeting from today. OpenStack QA team IRC meeting will be Thursday, Feb 22nd at 8:00 UTC in the #openstack-meeting channel. The agenda for the meeting can be found here: https://wiki.openstack.org/wiki/Meetings/QATeamMeeting#Agenda_for_Feb_22nd_2018_.280800_UTC.29 Anyone is welcome to add an item to the agenda. -gmann From jianghua.wang at citrix.com Thu Feb 22 09:58:42 2018 From: jianghua.wang at citrix.com (Jianghua Wang) Date: Thu, 22 Feb 2018 09:58:42 +0000 Subject: [openstack-dev] Debian OpenStack packages switching to Py3 for Queens Message-ID: Thomas, H. 
and Bob, Please note only the scripts under "os_xenapi/dom0/etc/xapi.d/plugins/" will run in dom0 only. During deployment an OpenStack environment, we usually copy the plugins into dom0 from the installed package (installed in DomU). In this way, it helps us to ensure the plugins are from the same release as the remaining part (e.g. the wrapper APIs invoked by Nova/Neutron/Ceilometer). Otherwise if we split plugins out, it will be difficult to ensure the compatibility. So I'd suggest we keep these plugins in the package. I had a chat with Bob on how to resolve the python 2 issue by include these plugins. We think a solution is to rename those plugins without the .py suffix so that they won't be treated as python files. Thomas, please help to confirm if it works for packaging. I can take responsibility to handle the needed change in os-xenapi. Thanks, Jianghua From: Bob Ball [mailto:bob.ball at citrix.com] Sent: Friday, February 16, 2018 3:02 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] Debian OpenStack packages switching to Py3 for Queens Hi, > If this code is meant to run on Dom0, fine, then we won't package it, > but we also have to decouple that dependency from Nova, Neutron, > Ceilometer etc... to either communicate directly through an API > endpoint or a light wrapper around it. There is already a light wrapper here - other parts of os-xenapi provide the API to Nova/Neutron/etc which make calls through to the plugins in Dom0. These projects should now know nothing about the actual plugins or how they are called. Bob ________________________________ From: Haïkel > Sent: Thursday, 15 February 2018 6:39 p.m. To: "OpenStack Development Mailing List (not for usage questions)" > Subject: Re: [openstack-dev] Debian OpenStack packages switching to Py3 for Queens 2018-02-15 11:25 GMT+01:00 Bob Ball >: > Hi Thomas, > > As noted on the patch, XenServer only has python 2 (and some versions of XenServer even has Python 2.4) in domain0. This is code that will not run in Debian (only in XenServer's dom0) and therefore can be ignored or removed from the Debian package. > It's not practical to convert these to support python 3. > > Bob >H. We're not there yet but we also plan to work on migrating RDO to Python 3. And I have to disagree, this code is called by other projects and their tests, so it will likely be an impediment in migrating OpenStack to Python 3, not just a "packaging" issue. If this code is meant to run on Dom0, fine, then we won't package it, but we also have to decouple that dependency from Nova, Neutron, Ceilometer etc... to either communicate directly through an API endpoint or a light wrapper around it. Regards, H. > -----Original Message----- > From: Thomas Goirand [mailto:zigo at debian.org] > Sent: 15 February 2018 08:31 > To: openstack-dev at lists.openstack.org > Subject: [openstack-dev] Debian OpenStack packages switching to Py3 for Queens > > Hi, > > Since I'm getting some pressure from other DDs to actively remove Py2 support from my packages, I'm very much considering switching all of the Debian packages for Queens to using exclusively Py3. I would have like to read some opinions about this. Is it a good time for such move? I hope it is, because I'd like to maintain as few Python package with Py2 support at the time of Debian Buster freeze. > > Also, doing Queens, I've noticed that os-xenapi is still full of py2 only stuff in os_xenapi/dom0. Can we get those fixes? 
Here's my patch: > > https://review.openstack.org/544809 > > Cheers, > > Thomas Goirand (zigo) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Thu Feb 22 10:01:02 2018 From: emilien at redhat.com (Emilien Macchi) Date: Thu, 22 Feb 2018 02:01:02 -0800 Subject: [openstack-dev] [PTG] StoryBoard Discussion Planning In-Reply-To: References: Message-ID: On Wed, Feb 21, 2018 at 11:38 AM, Kendall Nelson wrote: > Hello Everyone! > > We have tentatively scheduled StoryBoard discussions Wednesday morning and > will officialize it in the PTGbot soon. If there are specific things you > wish to discuss please add them to our planning etherpad[1]! If you think > you might attend, please also add your name to the list so we know you're > interested. > > See you all next week! > > -Kendall (diablo_rojo) > > [1] https://etherpad.openstack.org/p/StoryBoard-Rocky-PTG > > Hey Kendall, Thanks a lot for making this happen. Unfortunately I won't be able to attend it since it overlaps with the TripleO sessions I really need to attend. But hopefully we can catch-up anytime during the week to discuss about TripleO / Storyboard future. Thanks, -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Thu Feb 22 10:04:37 2018 From: emilien at redhat.com (Emilien Macchi) Date: Thu, 22 Feb 2018 02:04:37 -0800 Subject: [openstack-dev] [tripleo] Queens retrospective at PTG Message-ID: TripleO team, At the PTG we'll start by a collective retrospective on how it went during Queens: https://etherpad.openstack.org/p/tripleo-ptg-rocky-retro Any early thoughts are more than welcome. As usual we'll try to make it dynamic and we hope everything can participate and share their voice! Thanks, -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From edouard.thuleau at gmail.com Thu Feb 22 11:41:22 2018 From: edouard.thuleau at gmail.com (=?UTF-8?Q?=C3=89douard_Thuleau?=) Date: Thu, 22 Feb 2018 12:41:22 +0100 Subject: [openstack-dev] [nova] Contrail VIF TAP plugging broken In-Reply-To: <3a33a37e-d329-cde3-9551-2afff09842ea@gmail.com> References: <3a33a37e-d329-cde3-9551-2afff09842ea@gmail.com> Message-ID: Thanks a lot Matt, Jay and Dan for your reactivity and your time. Édouard. On Wed, Feb 21, 2018 at 9:50 PM, Matt Riedemann wrote: > On 2/21/2018 4:30 AM, Édouard Thuleau wrote: > >> Hi Seán, Michael, >> >> Since patch [1] moved Contrail VIF plugging under privsep, Nova fails to >> plug TAP on the Contrail software switch (named vrouter) [2]. 
I proposed a >> fix in the beginning of the year [3] but it still pending approval even it >> got a couple of +1 and no negative feedback. It's why I'm writing that >> email to get your attention. >> That issue appeared during the Queens development cycle and we need to >> fix that before it was released (hope we are not to late). >> Contrail already started to move on os-vif driver [4]. A first VIF type >> driver is there for DPDK case [5], we plan to do the same for the TAP case >> in the R release and remove the Nova VIF plugging code for the vrouter. >> >> [1] https://review.openstack.org/#/c/515916/ >> [2] https://bugs.launchpad.net/nova/+bug/1742963 >> [3] https://review.openstack.org/#/c/533212/ >> [4] https://github.com/Juniper/contrail-nova-vif-driver >> [5] https://review.openstack.org/#/c/441183/ >> >> Regards, >> Édouard. >> >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > Approved the change on master and working on the backport to > stable/queens. We'll be cutting an RC3 tomorrow so I'll make sure this gets > into that. > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From no-reply at openstack.org Thu Feb 22 13:49:35 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 22 Feb 2018 13:49:35 -0000 Subject: [openstack-dev] [glance] glance 16.0.0.0rc3 (queens) Message-ID: Hello everyone, A new release candidate for glance for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/glance/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/glance/log/?h=stable/queens Release notes for glance can be found at: http://docs.openstack.org/releasenotes/glance/ From no-reply at openstack.org Thu Feb 22 14:03:50 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 22 Feb 2018 14:03:50 -0000 Subject: [openstack-dev] [manila] manila 6.0.0.0rc3 (queens) Message-ID: Hello everyone, A new release candidate for manila for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/manila/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! 
Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/manila/log/?h=stable/queens Release notes for manila can be found at: http://docs.openstack.org/releasenotes/manila/ From gmann at ghanshyammann.com Thu Feb 22 14:11:06 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 22 Feb 2018 23:11:06 +0900 Subject: [openstack-dev] [QA][PTG] QA Dinner Night Message-ID: Hi All, I'd like to propose a QA Dinner night for the people attending QA sessions at the Dublin PTG. I initiated a doodle vote [1] to choose the appropriate date. Please vote as per your availability. ..1 https://doodle.com/poll/t7phezrq25zrqzz3 -gmann From james.page at ubuntu.com Thu Feb 22 14:30:35 2018 From: james.page at ubuntu.com (James Page) Date: Thu, 22 Feb 2018 14:30:35 +0000 Subject: [openstack-dev] [charms] [ptg] charms dinner Message-ID: Hi Team As I'm only managing to get to the PTG for Mon/Tues lets schedule a dinner for Monday night; I'll sort out a venue - lemme know direct this week if you'll be coming along! Cheers James -------------- next part -------------- An HTML attachment was scrubbed... URL: From Louie.Kwan at windriver.com Thu Feb 22 14:43:12 2018 From: Louie.Kwan at windriver.com (Kwan, Louie) Date: Thu, 22 Feb 2018 14:43:12 +0000 Subject: [openstack-dev] [masakari] [masakari-monitors] : Masakari notification failed. Message-ID: <47EFB32CD8770A4D9590812EE28C977E96253B42@ALA-MBD.corp.ad.wrs.com> Good for now. The issue should be related to some situations that VM stop instance task is taking longer and it seems one of the periodic task is timing out. To avoid the exception, may try to increase some of the timeout values. Or increase the looping interval for retry… Thanks. LK From: Kwan, Louie Sent: Tuesday, February 20, 2018 5:17 PM To: 'OpenStack Development Mailing List (not for usage questions)' Subject: [openstack-dev] [masakari] [masakari-monitors] : Masakari notification failed. Hi Masakari community, I would like to get your help to understand what may be causing the Masakari notification failed. I do get success cases which the engine got the notification, VM got shutdown and rebooted ok. Having said that, there are some cases that the notification failed and it seems there are some conflicts going on. 20% to 40% chance. Feb 20 21:53:21 masakari-2 masakari-engine[3807]: 2018-02-20 21:53:21.517 WARNING masakari.engine.drivers.taskflow.driver [req-ce909151-1afb-4f2f-abf4-f25d54f25c6b service None] Task 'masakari.engine.drivers.taskflow.instance_failure.StopInstanceTask;instance:recovery' (e85dec06-1498-482c-a63a-51f855745c32) transitioned into state 'FAILURE' from state 'RUNNING' Feb 20 21:53:21 masakari-2 masakari-engine[3807]: 1 predecessors (most recent first): Feb 20 21:53:21 masakari-2 masakari-engine[3807]: Flow 'instance_recovery_engine': Conflict: Conflict Is it normal that masakari notification would be failed because of timing or conflicting events? FYI, I only have one VM and one active notification. Enclosed is the log file I got from the engine. I do appreciate if anyone of you can provide some insight what to do with the failure. Any tip where to look at etc? Timeout? Thanks. 
Louie | notification_uuid | generated_time | status | type | source_host_uuid | payload | +--------------------------------------+----------------------------+----------+------+--------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------+ | 42ccee84-0ea5-4163-84a5-028a0bb914a3 | 2018-02-20T21:52:03.000000 | failed | VM | 66c8b5b9-03f5-4843-8a9c-fa83af807a9b | {u'instance_uuid': u'565da9ba-3c0c-4087-83ca-32a5a1b00a55', u'vir_domain_event': u'STOPPED_FAILED', u'event': u'QEMU_GUEST_AGENT_ERROR'} | | aa4184f3-b002-4ba8-a403-f22ccd4ce6b5 | 2018-02-20T21:42:54.000000 | finished | VM | 66c8b5b9-03f5-4843-8a9c-fa83af807a9b | {u'instance_uuid': u'565da9ba-3c0c-4087-83ca-32a5a1b00a55', u'vir_domain_event': u'STOPPED_FAILED', u'event': u'QEMU_GUEST_AGENT_ERROR'} | -------------- next part -------------- An HTML attachment was scrubbed... URL: From no-reply at openstack.org Thu Feb 22 14:49:57 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 22 Feb 2018 14:49:57 -0000 Subject: [openstack-dev] [murano] murano 5.0.0.0rc2 (queens) Message-ID: Hello everyone, A new release candidate for murano for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/murano/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/murano/log/?h=stable/queens Release notes for murano can be found at: http://docs.openstack.org/releasenotes/murano/ From no-reply at openstack.org Thu Feb 22 15:16:07 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 22 Feb 2018 15:16:07 -0000 Subject: [openstack-dev] [murano] murano-dashboard 5.0.0.0rc2 (queens) Message-ID: Hello everyone, A new release candidate for murano-dashboard for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/murano-dashboard/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/murano-dashboard/log/?h=stable/queens Release notes for murano-dashboard can be found at: http://docs.openstack.org/releasenotes/murano-dashboard/ From no-reply at openstack.org Thu Feb 22 15:20:47 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 22 Feb 2018 15:20:47 -0000 Subject: [openstack-dev] [keystone] keystone 13.0.0.0rc2 (queens) Message-ID: Hello everyone, A new release candidate for keystone for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/keystone/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! 
Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/keystone/log/?h=stable/queens Release notes for keystone can be found at: http://docs.openstack.org/releasenotes/keystone/ From jaypipes at gmail.com Thu Feb 22 16:04:43 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Thu, 22 Feb 2018 11:04:43 -0500 Subject: [openstack-dev] [nova] Adding Takashi Natsume to python-novaclient core In-Reply-To: <6ff0919e-fb4f-3dc4-1ece-8c10da273724@lab.ntt.co.jp> References: <1dc00987-28a6-c9d0-6e70-0a9346edd3f9@gmail.com> <6ff0919e-fb4f-3dc4-1ece-8c10da273724@lab.ntt.co.jp> Message-ID: Hi Takashi, As you know, nova-core and python-novaclient-core memberships are evaluated separately (they are different teams with some overlap in membership). While your contributions to Nova are absolutely appreciated and recognized, there is a different group that votes on nova-core membership and that group evaluates nominations differently. Please take your python-novaclient-core membership as an indication that you are on the right direction and that you have been recognized as being proficient in the client code. Doing in-depth Nova code reviews with lots of questions to code submitters and other reviewers alike will take you that extra step to a nova-core nomination. All the best, -jay On Sun, Feb 18, 2018 at 5:56 PM, Takashi Natsume < natsume.takashi at lab.ntt.co.jp> wrote: > Thank you, Matt and everyone. > > But I would like to become a core reviewer for the nova project as well as > python-novaclient. > I have contributed more in the nova project than python-novaclient. > I have done total 2,700+ reviews for the nova project in all releases (*1). > (Total 115 reviews only for python-novaclient.) > > *1: http://stackalytics.com/?release=all&user_id=natsume-takashi > > On 2018/02/16 2:18, Matt Riedemann wrote: > >> On 2/9/2018 9:01 AM, Matt Riedemann wrote: >> >>> I'd like to add Takashi to the python-novaclient core team. >>> >>> python-novaclient doesn't get a ton of activity or review, but Takashi >>> has been a solid reviewer and contributor to that project for quite awhile >>> now: >>> >>> http://stackalytics.com/report/contribution/python-novaclient/180 >>> >>> He's always fast to get new changes up for microversion support and help >>> review others that are there to keep moving changes forward. >>> >>> So unless there are objections, I'll plan on adding Takashi to the >>> python-novaclient-core group next week. >>> >> >> I've added Takashi to python-novaclient-core: >> >> https://review.openstack.org/#/admin/groups/572,members >> >> Thanks everyone. >> >> > Regards, > Takashi Natsume > NTT Software Innovation Center > E-mail: natsume.takashi at lab.ntt.co.jp > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ed at leafe.com Thu Feb 22 16:22:40 2018 From: ed at leafe.com (Ed Leafe) Date: Thu, 22 Feb 2018 10:22:40 -0600 Subject: [openstack-dev] [all][api] POST /api-sig/news Message-ID: Greetings OpenStack community, Well, with PTG just around the corner, and the API-SIG portion of that just 4 days away, we elected to have a quick meeting today. 
The only topic we needed to discuss was the results of our votes for the topics to cover on Monday. We used an etherpad with the proposed topics [7], and edleafe compiled the results here [8]. Since "microversions" appears in one of the topics, I'm sure we'll be in discussions all day. So if you will be in Dublin next week, the API-SIG is meeting all day Mondy. Please join us! As always if you're interested in helping out, in addition to coming to the meetings, there's also: * The list of bugs [5] indicates several missing or incomplete guidelines. * The existing guidelines [2] always need refreshing to account for changes over time. If you find something that's not quite right, submit a patch [6] to fix it. * Have you done something for which you think guidance would have made things easier but couldn't find any? Submit a patch and help others [6]. # Newly Published Guidelines None this week. # API Guidelines Proposed for Freeze Guidelines that are ready for wider review by the whole community. None this week. # Guidelines Currently Under Review [3] * Add guideline on exposing microversions in SDKs https://review.openstack.org/#/c/532814/ * Add API-schema guide (still being defined) https://review.openstack.org/#/c/524467/ * A (shrinking) suite of several documents about doing version and service discovery Start at https://review.openstack.org/#/c/459405/ * WIP: microversion architecture archival doc (very early; not yet ready for review) https://review.openstack.org/444892 # Highlighting your API impacting issues If you seek further review and insight from the API SIG about APIs that you are developing or changing, please address your concerns in an email to the OpenStack developer mailing list[1] with the tag "[api]" in the subject. In your email, you should include any relevant reviews, links, and comments to help guide the discussion of the specific challenge you are facing. To learn more about the API SIG mission and the work we do, see our wiki page [4] and guidelines [2]. Thanks for reading and see you next week! # References [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [2] http://specs.openstack.org/openstack/api-wg/ [3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z [4] https://wiki.openstack.org/wiki/API_SIG [5] https://bugs.launchpad.net/openstack-api-wg [6] https://git.openstack.org/cgit/openstack/api-wg [7] https://etherpad.openstack.org/p/api-sig-ptg-rocky [8] https://ethercalc.openstack.org/xja22ghws13i Meeting Agenda https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda Past Meeting Records http://eavesdrop.openstack.org/meetings/api_sig/ Open Bugs https://bugs.launchpad.net/openstack-api-wg -- Ed Leafe From sean.mcginnis at gmx.com Thu Feb 22 16:33:55 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 22 Feb 2018 10:33:55 -0600 Subject: [openstack-dev] [release] Release countdown for week R-0, February 24 - March 3 Message-ID: <20180222163354.GA19895@sm-xps> This is the final countdown email for the Queens development cycle. Thanks to everyone involved in the Queens release! Development Focus ----------------- Teams attending the PTG should be preparing for those discussions and capturing information in the etherpads: https://wiki.openstack.org/wiki/PTG/Rocky/Etherpads General Information ------------------- The release team plans on doing the final Queens release on 26 February. That's right, I said February. We will re-tag the last commit used for the final RC using the final version number. 
The cycle-trailing projects will need to have release candidates posted by 1 March. They will then have two weeks before their release deadline on 15 March. If you have not already done so, now would be a good time to take a look at the Rocky schedule and start planning team activities: https://releases.openstack.org/rocky/schedule.html Actions --------- PTLs and release liaisons should watch for the final release patch from the releae team. While not required, we would appreciate having an ack from each team before we approve it on the 26th. Upcoming Deadlines & Dates -------------------------- Rocky PTG in Dublin: Week of February 26, 2018 Queens cycle-trailing RC1 deadline: March 1 Queens cycle-trailing final release: March 15 -- Sean McGinnis (smcginnis) From no-reply at openstack.org Thu Feb 22 16:39:53 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 22 Feb 2018 16:39:53 -0000 Subject: [openstack-dev] nova_powervm 6.0.0.0rc2 (queens) Message-ID: Hello everyone, A new release candidate for nova_powervm for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/nova-powervm/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/nova_powervm/log/?h=stable/queens Release notes for nova_powervm can be found at: http://docs.openstack.org/releasenotes/nova_powervm/ From no-reply at openstack.org Thu Feb 22 16:53:15 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 22 Feb 2018 16:53:15 -0000 Subject: [openstack-dev] networking-powervm 6.0.0.0rc2 (queens) Message-ID: Hello everyone, A new release candidate for networking-powervm for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/networking-powervm/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/networking-powervm/log/?h=stable/queens Release notes for networking-powervm can be found at: http://docs.openstack.org/releasenotes/networking-powervm/ From pkovar at redhat.com Thu Feb 22 17:02:40 2018 From: pkovar at redhat.com (Petr Kovar) Date: Thu, 22 Feb 2018 18:02:40 +0100 Subject: [openstack-dev] [docs][ptg][all] Documentation help for project teams Message-ID: <20180222180240.e7396115456f3c58eac0812a@redhat.com> Hi all, Similarly to the Queens PTG, we will have docs people around at the Rocky PTG from Wednesday through Friday next week to help project teams with their docs! People from the docs team can meet with your project team to discuss your documentation needs. Together, we can look into planning, structuring, building and publishing of your project documentation. The project team docs sessions are scheduled dynamically per the project and docs teams' needs. Please use the docs-i18n-ptg-rocky etherpad to sign up for your team (scroll down a bit): https://etherpad.openstack.org/p/docs-i18n-ptg-rocky See you at the PTG! 
Cheers, pk From ifat.afek at nokia.com Thu Feb 22 17:16:39 2018 From: ifat.afek at nokia.com (Afek, Ifat (Nokia - IL/Kfar Sava)) Date: Thu, 22 Feb 2018 17:16:39 +0000 Subject: [openstack-dev] [vitrage] Vitrage alarm processing behavior In-Reply-To: References: <2E8BC35D-3FC3-40C1-85F2-09E4C3D4BB2E@nokia.com> Message-ID: Hi Paul, Unfortunately I can’t figure out from the log what went wrong. It seems like the ‘up’ alarms are ignored. Two things that I would try next: · Try calling an event with ‘status’:’up’ and see if it works. This is working for sure in my environment · I suspect that the problem is somewhere in AlarmDriverBase._filter_and_cache_alarms(). Basically it should search the old alarm in the cache and update it. Try to add many debug messages so we could see the cache, the new alarm and the old alarm. Let me know if it helped. Ifat. From: Paul Vaduva Date: Wednesday, 21 February 2018 at 19:11 To: "Afek, Ifat (Nokia - IL/Kfar Sava)" , "OpenStack Development Mailing List (not for usage questions)" Cc: Ciprian Barbu Subject: RE: [openstack-dev] [vitrage] Vitrage alarm processing behavior Hi Ifat, Link to cuted log version https://hastebin.com/upokifinuq.py Plus full graph.log attached plus code for driver.py with Logging modifications Thanks, Paul From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com] Sent: Wednesday, February 21, 2018 6:18 PM To: OpenStack Development Mailing List (not for usage questions) Cc: Ciprian Barbu Subject: Re: [openstack-dev] [vitrage] Vitrage alarm processing behavior Hi Paul, I suggest that you do the following: · Add a LOG message at the end of _get_alarms to print all alarms that are returned by this function · Restart vitrage-graph and send me its log. I’d like to see if there is any difference between the alarm that is raised and the alarm that is deleted. Thanks, Ifat. From: Paul Vaduva > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Wednesday, 21 February 2018 at 16:30 To: "OpenStack Development Mailing List (not for usage questions)" > Cc: Ciprian Barbu > Subject: Re: [openstack-dev] [vitrage] Vitrage alarm processing behavior I attached also the driver.py that I am using. From: Paul Vaduva [mailto:Paul.Vaduva at enea.com] Sent: Wednesday, February 21, 2018 3:22 PM To: OpenStack Development Mailing List (not for usage questions) > Cc: Ciprian Barbu > Subject: [Attachment removed] Re: [openstack-dev] [vitrage] Vitrage alarm processing behavior Hi Ifat, Sorry for the late reply. To answer your questions I started as an example from the doctor datasource (or a porting of it for the 1.3.0 version of vitrage) but will call it something different so no need to worry about conflicting with present doctor datasource. I added polling alarms to it but I have a more particular use case: * I get compute host down alarm on event * I can't get host up event or it's an intricate sollution to implement I tried to see if I can make the following scenario work: Let's call Scenario I * Get a compute host down event (Raisng an alarm) * Periodically poll for the status of the compute in method "def _get_alarms(self):" of the Driver object Both type of Interactions seem to work (polling and event based). However now comes the tricky part. I would need for the alarms (with status up / compute host up) returned by method "def _get_alarms(self):" of this Driver object to cancel/clear the compute host down alarms raised by event. This unfortunatelly does not happen. 
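For reference, a rough sketch of the event-based 'up' notification Ifat suggests above might look something like the following. Only the 'status' key is confirmed in this thread; the other field names, the event type and the notifier call are assumptions based on how the modified zabbix_vitrage.py publishes to the vitrage_notifications.info queue:

    # Hypothetical payload: mirror whatever details the 'down' event carried,
    # but flip the status so the Doctor-style datasource can clear the alarm.
    details = {
        'hostname': 'compute-1',   # host the original 'down' alarm was raised for
        'status': 'up',
    }
    # Assuming an oslo.messaging Notifier like the one the script already uses:
    notifier.info({}, 'compute.host.down', details)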
Oddly enough, there is a mimic of this scenario that works but is not robust enough for our needs. Let's call it Scenario II: * Getting an event with compute host down (when one of our computes actually goes down) * A polling alarm (also compute host down) is raised and somehow overwrites the event-based one (I can see the updated time). * After a while the actual compute reboots and polling for the alarms returns an alarm with status up, which in this case clears the previous (I assume now polling-type) alarm. Now I can't understand why this second scenario works and the first one does not. It seems as if the same alarm type (compute host down with status down) obtained by polling can overwrite an identical type and status alarm raised by event, but an alarm with an updated status (i.e. up) obtained by polling cannot overwrite / clear an alarm with status down raised by an event. I am wondering if there is a reason for this behavior and if there is a way to modify it, or whether it is a bug. For the event generation I use a modified version of the zabbix_vitrage.py script that publishes to the rabbitmq vitrage_notifications.info queue. I have attached this python script. The code is still experimental
So it was a bug. I understand that for doctor datasources I need to have events for raising the alarm and also for clearing it is that correct? Best Regards, Paul From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com] Sent: Wednesday, February 7, 2018 1:24 PM To: OpenStack Development Mailing List (not for usage questions) > Subject: Re: [openstack-dev] [vitrage] Vitrage alarm processing behavior Hi Paul, It sounds like a bug. Alarms created by a datasource are not supposed to be deleted later on. It might be a bug that was fixed in Queens [1]. I’m not sure which Vitrage version you are actually using. I failed to find a vitrage version 1.3.1. Could it be that you are referring to a version of python-vitrageclient or vitrage-dashboard? In any case, if you are using an older version, I suggest that you try to use the fix that I mentioned [1] and see if it helps. [1] https://review.openstack.org/#/c/524228 Best Regards, Ifat. From: Paul Vaduva > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Wednesday, 7 February 2018 at 11:58 To: "openstack-dev at lists.openstack.org" > Subject: [openstack-dev] [vitrage] Vitrage alarm processing behavior Hi Vitrage developers, I have a question about vitrage innerworkings, I ported doctor datasource from master branch to an earlier version of vitrage (1.3.1). I noticed some behavior I am wondering if it's ok or it is bug of some sort. Here it is: 1. I am sending some event for rasing an alarm to doctor datasource of vitrage. 2. I am receiving the event hence the alarm is displayed on vitrage dashboard attached to the affected resource (as expected) 3. If I have configured snapshot_interval=10 in /etc/vitrage/vitrage.conf The alarm disapears after a while fragment from /etc/vitrage/vitrage.conf *************** [datasources] types = nova.host,nova.instance,nova.zone,cinder.volume,neutron.network,neutron.port,doctor snapshots_interval=10 *************** On the other hand if I comment it out the alarm persists ************** [datasources] types = nova.host,nova.instance,nova.zone,cinder.volume,neutron.network,neutron.port,doctor #snapshots_interval=10 ************** I am interested if this behavior is correct or is this a bug. My intention is to create some sort of hybrid datasource starting from the doctor one, that receives events for raising alarms like compute.host.down but uses polling to clear them. Best Regards, Paul Vaduva -------------- next part -------------- An HTML attachment was scrubbed... URL: From no-reply at openstack.org Thu Feb 22 19:00:46 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 22 Feb 2018 19:00:46 -0000 Subject: [openstack-dev] [nova] nova 17.0.0.0rc3 (queens) Message-ID: Hello everyone, A new release candidate for nova for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/nova/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! 
Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/nova/log/?h=stable/queens Release notes for nova can be found at: http://docs.openstack.org/releasenotes/nova/ From lbragstad at gmail.com Thu Feb 22 19:26:25 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Thu, 22 Feb 2018 13:26:25 -0600 Subject: [openstack-dev] [keystone] PTG etherpads Message-ID: Hey all, The schedule has been posted for a while, but I went through and filled out some etherpads for each topic. All of them should be discoverable from the main keystone etherpad [0]. This applies to the identity integration etherpad as well [1]. As always, there is usually some amount of level-setting required before we jump into discussions. Less time getting people up-to-speed means more time discussing possible solutions. If you're interested in a particular topic, please familiarize yourself with the *context* section of the etherpad, if available. If there is a specific topic you'd like to elaborate on, please don't hesitate to add to the discussion early. Thanks, Lance [0] https://etherpad.openstack.org/p/keystone-rocky-ptg [1] https://etherpad.openstack.org/p/baremetal-vm-rocky-ptg -------------- next part -------------- An HTML attachment was scrubbed... URL: From shilla.saebi at gmail.com Thu Feb 22 19:40:05 2018 From: shilla.saebi at gmail.com (Shilla Saebi) Date: Thu, 22 Feb 2018 14:40:05 -0500 Subject: [openstack-dev] [User-committee] [Openstack-operators] User Committee Elections Message-ID: Hi Everyone, Just a friendly reminder that voting is still open! Please be sure to check out the candidates - https://goo.gl/x183he - and vote before February 25th, 11:59 UTC. Thanks! Shilla On Mon, Feb 19, 2018 at 1:38 PM, wrote: > I saw election email with the pointer to votes. > > See no reason for stopping it now. But extending vote for 1 more week > makes sense. > > Thanks, > Arkady > > > > *From:* Melvin Hillsman [mailto:mrhillsman at gmail.com] > *Sent:* Monday, February 19, 2018 11:32 AM > *To:* user-committee ; OpenStack > Mailing List ; OpenStack Operators < > openstack-operators at lists.openstack.org>; OpenStack Dev < > openstack-dev at lists.openstack.org>; community at lists.openstack.org > *Subject:* [Openstack-operators] User Committee Elections > > > > Hi everyone, > > > > We had to push the voting back a week if you have been keeping up with the > UC elections[0]. That being said, election officials have sent out the poll > and so voting is now open! Be sure to check out the candidates - > https://goo.gl/x183he - and get your vote in before the poll closes. > > > > [0] https://governance.openstack.org/uc/reference/uc-election-feb2018.html > > > > -- > > Kind regards, > > Melvin Hillsman > > mrhillsman at gmail.com > mobile: (832) 264-2646 > > _______________________________________________ > User-committee mailing list > User-committee at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From no-reply at openstack.org Thu Feb 22 20:23:33 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 22 Feb 2018 20:23:33 -0000 Subject: [openstack-dev] [congress] congress 7.0.0.0rc2 (queens) Message-ID: Hello everyone, A new release candidate for congress for the end of the Queens cycle is available! 
You can find the source code tarball at: https://tarballs.openstack.org/congress/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/congress/log/?h=stable/queens Release notes for congress can be found at: http://docs.openstack.org/releasenotes/congress/ From aspiers at suse.com Thu Feb 22 20:23:22 2018 From: aspiers at suse.com (Adam Spiers) Date: Thu, 22 Feb 2018 20:23:22 +0000 Subject: [openstack-dev] Etherpad for self-healing In-Reply-To: References: Message-ID: <20180222202322.xkk5nuyjrc53zvks@pacific.linksys.moosehall> Furukawa, Yushiro wrote: >Hi everyone, > >I am seeing Self-healing scheduled on Tuesday afternoon[1], but the etherpad for it is not listed in [2]. >I made following etherpad by some chance. Thanks! You beat me to it ;-) >Would it be possible to update Etherpads wiki page? Done. >https://etherpad.openstack.org/p/self-healing-ptg-rocky I'm also adding some more ideas for topics to the etherpad, and then I'll (re-)announce it here and also to openstack-{operators,sigs} to promote visibility. From no-reply at openstack.org Thu Feb 22 20:36:18 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Thu, 22 Feb 2018 20:36:18 -0000 Subject: [openstack-dev] [designate] designate-dashboard 6.0.0.0rc2 (queens) Message-ID: Hello everyone, A new release candidate for designate-dashboard for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/designate-dashboard/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/designate-dashboard/log/?h=stable/queens Release notes for designate-dashboard can be found at: http://docs.openstack.org/releasenotes/designate-dashboard/ From rosmaita.fossdev at gmail.com Thu Feb 22 21:21:46 2018 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Thu, 22 Feb 2018 16:21:46 -0500 Subject: [openstack-dev] [glance] priorities for the coming week (23 Feb - 1 March) Message-ID: Hello Glancers, RC-3 was released today [0] and we expect it to be the final release candidate (unless someone finds a critical problem in the next 3 hours). Priority for this week is preparing for the PTG. If you can't attend, Erno has posted a topic list [1] you can look at. For topics without specs, feel free to leave questions and comments for items you would like to see addressed. For topics with specs proposed, please read and comment on the gerrit reviews. Erno has also posted a rough schedule of what we'll be discussing when [2]. We'll keep an eye on the #openstack-glance IRC channel during the PTG, but your best bet for getting our attention is to put comments on the etherpad [1]. The Glance meeting is cancelled on 1 March due to the PTG. Best wishes for safe travels for those attending the PTG, and for those at home ... stay safe out there! 
cheers, brian [0] http://lists.openstack.org/pipermail/openstack-dev/2018-February/127642.html [1] https://etherpad.openstack.org/p/glance-rocky-ptg [2] https://ethercalc.openstack.org/5ow7tkekq151 From tpb at dyncloud.net Thu Feb 22 21:29:19 2018 From: tpb at dyncloud.net (Tom Barron) Date: Thu, 22 Feb 2018 16:29:19 -0500 Subject: [openstack-dev] [manila] queens retrospective as we kickoff Rocky Message-ID: <20180222212919.nkt7x7i4y5mshbda@barron.net> We'll start the manila meetings Tuesday with a retrospective on the Queens cycle. Whether you'll be at the PTG or not, please add your thoughts to the retrospective etherpad [1] so we can discuss them and figure out how to continuously improve. Please tag items with your nick. We'll make sure to follow up post-PTG in our weekly meetings on items with owners or stakeholders not actually present at the PTG. Thanks! -- Tom [1] https://etherpad.openstack.org/p/manila-ptg-rocky-retro -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From aspiers at suse.com Thu Feb 22 21:30:10 2018 From: aspiers at suse.com (Adam Spiers) Date: Thu, 22 Feb 2018 21:30:10 +0000 Subject: [openstack-dev] [self-healing][PTG] etherpad for PTG session on self-healing Message-ID: <20180222213010.wxsmgwvdy6vlwxgi@pacific.linksys.moosehall> Hi all, Yushiro kindly created an etherpad for the self-healing SIG session at the Dublin PTG on Tuesday afternoon next week, and I've fleshed it out a bit: https://etherpad.openstack.org/p/self-healing-ptg-rocky Anyone with an interest in self-healing is of course very welcome to attend (or keep an eye on it remotely!) This SIG is still very young, so it's a great chance for you to shape the direction it takes :-) If you are able to attend, please add your name, and also feel free to add topics which you would like to see covered. It would be particularly helpful if operators could participate and share their experiences of what is or isn't (yet!) working with self-healing in OpenStack, so that those of us on the development side can aim to solve the right problems :-) Thanks, and see some of you in Dublin! Adam From amoralej at redhat.com Thu Feb 22 21:49:05 2018 From: amoralej at redhat.com (Alfredo Moralejo Alonso) Date: Thu, 22 Feb 2018 22:49:05 +0100 Subject: [openstack-dev] [yaql] [tripleo] Backward incompatible change in YAQL minor version In-Reply-To: <20180221230616.GA11131@thor.bakeyournoodle.com> References: <20180218003536.GY23143@thor.bakeyournoodle.com> <20180219232420.GB23143@thor.bakeyournoodle.com> <20180221230616.GA11131@thor.bakeyournoodle.com> Message-ID: On Thu, Feb 22, 2018 at 12:06 AM, Tony Breeds wrote: > On Wed, Feb 21, 2018 at 08:53:09PM +0100, Alfredo Moralejo Alonso wrote: > > > Short version is: > > > > 1. We generate a review to rdoinfo (RDO's package database) every time a > > change is detected in upper-constraints.txt proposing it as candidate in > > dependencies repo. > > 2. A job in rdoinfo gate detects if the required version is available in > > fedora. If so, it tries to rebuild it for CentOS and add it to CentOS > > dependencies repo. If the version is not available in fedora or can not > be > > rebuilt for CentOS, the review fails in gate and a manual action is > > required. > > 3. If the dependency can be rebuilt from fedora, a change is proposed to > > promote it to the testing phase (the one used in upstream gate jobs for > > master). 
A set of jobs deploying OpenStack with packstack, > > puppet-openstack-integration and tripleo are executed to gate the > > dependency update. When the review is merged, the new or updated > dependency > > is pushed to the RDO repo. > > See it was just my lack of imagination. I guess I was getting hung up > on step 2 but the fall back to manual action is pretty reasonable there. > It'll be interesting to see how often that happens, and also what > happens in the scenario where the RDO package has diverged slightly from > Fedora. > > > If you are interested i can discuss the implementation details. > > Yeah I'd very much like to grab 10-15 mins of your time next week, if > you're cool with that. > > Sure, i'll be happy to discuss about current status and plans we have to improve it. > Yours Tony. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Thu Feb 22 23:29:21 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 22 Feb 2018 18:29:21 -0500 Subject: [openstack-dev] [ptl][all][python3] collecting current status of python 3 support in projects Message-ID: <1519341965-sup-8914@lrrr.local> I am trying to update the wiki document with the current state of support for Python 3 projects as part of preparing for a discussion about moving from "Python 2 first, then 3" to "Python 3 first, then 2" development. I have added the missing libraries and services (at least those managed by the release team) and done my best to figure out if there are unit and functional/integration test jobs for each project. I need your help to verify the information I have collected and fill in any gaps. Please look through the tables in [1] and if your projects' status is out of date either update the page directly or email me (off list) with the updates. Thanks! Doug [1] https://wiki.openstack.org/wiki/Python3#Python_3_Status_of_OpenStack_projects From dsneddon at redhat.com Fri Feb 23 00:55:30 2018 From: dsneddon at redhat.com (Dan Sneddon) Date: Thu, 22 Feb 2018 16:55:30 -0800 Subject: [openstack-dev] [TripleO][ui] Network Configuration wizard In-Reply-To: References: Message-ID: On Thu, Feb 15, 2018 at 2:00 AM, Jiri Tomasek wrote: > > On Wed, Feb 14, 2018 at 11:16 PM, Ben Nemec > wrote: > >> >> >> On 02/09/2018 08:49 AM, Jiri Tomasek wrote: >> >>> *Step 2. network-environment -> NIC configs* >>> >>> Second step of network configuration is NIC config. For this >>> network-environment.yaml is used which references NIC config templates >>> which define network_config in their resources section. User is currently >>> required to configure these templates manually. We would like to provide >>> interactive view which would allow user to setup these templates using >>> TripleO UI. A good example is a standalone tool created by Ben Nemec [3]. >>> >>> There is currently work aimed for Pike to introduce jinja templating for >>> network environments and templates [4] (single-nic-with-vlans, >>> bond-with-vlans) to support composable networks and roles (integrate data >>> from roles_data.yaml and network_data.yaml) It would be great if we could >>> move this one step further by using these samples as a starting point and >>> let user specify full NIC configuration. 
>>> >>> Available information at this point: >>> - list of roles and networks as well as which networks need to be >>> configured at which role's NIC Config template >>> - os-net-config schema which defines NIC configuration elements and >>> relationships [5] >>> - jinja templated sample NIC templates >>> >>> Requirements: >>> - provide feedback to the user about networks assigned to role and have >>> not been configured in NIC config yet >>> >> >> I don't have much to add on this point, but I will note that because my >> UI is standalone and pre-dates composable networks it takes the opposite >> approach. As a user adds a network to a role, it exposes the configuration >> for that network. Since you have the networks ahead of time, you can >> obviously expose all of those settings up front and ensure the correct >> networks are configured for each nic-config. >> >> I say this mostly for everyone's awareness so design elements of my tool >> don't get copied where they don't make sense. >> >> - let user construct network_config section of NIC config templates for >>> each role (brigdes/bonds/vlans/interfaces...) >>> - provide means to assign network to vlans/interfaces and automatically >>> construct network_config section parameter references >>> >> >> So obviously your UI code is going to differ, but I will point out that >> the code in my tool for generating the actual os-net-config data is >> semi-standalone: https://github.com/cybertron/t >> ripleo-scripts/blob/master/net_processing.py >> >> It's also about 600 lines of code and doesn't even handle custom roles or >> networks yet. I'm not clear whether it ever will at this point given the >> change in my focus. >> >> Unfortunately the input JSON schema isn't formally documented, although >> the unit tests do include a number of examples. >> https://github.com/cybertron/tripleo-scripts/blob/master/tes >> t-data/all-the-things/nic-input.json covers quite a few different cases. >> >> - populate parameter definitions in NIC config templates based on >>> role/networks assignment >>> - populate parameter definitions in NIC config templates based on >>> specific elements which use them e.g. BondInterfaceOvsOptions in case when >>> ovs_bond is used >>> >> >> I guess there's two ways to handle this - you could use the new jinja >> templating to generate parameters, or you could handle it in the generation >> code. >> >> I'm not sure if there's a chicken-and-egg problem with the UI generating >> jinja templates, but that's probably the simplest option if it works. The >> approach I took with my tool was to just throw all the parameters into all >> the files and if they're unused then oh well. With jinja templating you >> could do the same thing - just copy a single boilerplate parameter header >> that includes the jinja from the example nic-configs and let the templating >> handle all the logic for you. >> >> It would be cleaner to generate static templates that don't need to be >> templated, but it would require re-implementing all of the custom network >> logic for the UI. I'm not sure being cleaner is sufficient justification >> for doing that. >> >> - store NIC config templates in deployment plan and reference them from >>> network-environment.yaml >>> >>> Problems to solve: >>> As a biggest problem to solve I see defining logic which would >>> automatically handle assigning parameters to elements in network_config >>> based on Network which user assigns to the element. 
For example: Using GUI, >>> user is creating network_config for compute role based on >>> network/config/multiple-nics/compute.yaml, user adds an interface and >>> assigns the interface to Tenant network. Resulting template should then >>> automatically populate addresses/ip_netmask: get_param: TenantIpSubnet. >>> Question is whether all this logic should live in GUI or should GUI pass >>> simplified format to Mistral workflow which will convert it to proper >>> network_config format and populates the template with it. >>> >> >> I guess the fact that I separated the UI and config generation code in my >> tool is my answer to this question. I don't remember all of my reasons for >> that design, but I think the main thing was to keep the input and >> generation cleanly separated. Otherwise there was a danger of making a UI >> change and having it break the generation process because they were tightly >> coupled. Having a JSON interface between the two avoids a lot of those >> problems. It also made it fairly easy to unit test the generation code, >> whereas trying to mock out all of the UI elements would have been a fragile >> nightmare. >> >> It does require a bunch of translation code[1], but a lot of it is fairly >> boilerplate (just map UI inputs to JSON keys). >> >> 1: https://github.com/cybertron/tripleo-scripts/blob/171aedabfe >> ad1f27f4dc0fce41a8b82da28923ed/net-iso-gen.py#L515 >> >> Hope this helps. > > > Ben, thanks a lot for your input. I think this makes the direction with > NIC configs clearer: > > 1. The generated template will include all possible parameters definitions > unless we find a suitable way of populating parameters section part of > template generation process. Note that current jinja templates for NIC > config (e.g. network/config/multiple-nics/role.role.j2.yaml:127) create > these definitions conditionally by specific role name which is not very > elegant in terms of custom roles. > This patch recently landed, which generates all the needed parameters in the sample NIC configs based on the composable networks defined in network_data.yaml: https://review.openstack.org/#/c/523638 Furthermore, this patch removes all the role-specific hard-coded templates, and generates templates based on the role-to-network association in roles_data.yaml. I think we could use this method to generate the needed parameters for the templates generated in the UI. I would personally like to see a workflow where the user chose one of the built-in NIC config designs to generate samples, which could then be further edited. Presenting a blank slate to the user, and requiring them to build up the hierarchy is very confusing unless the installer is very familiar with the desired architecture (first add a bridge, then add a bond to the bridge, then add interfaces to the bond, then add VLANs to the bridge). It's better to start with a basic example (VLANs on a single NIC, one NIC per network, DPDK, etc.), and allow the user to customize from there. > > 2. GUI is going to define forms to add/configure network elements > (interface, bridge, bond, vlan, ...) and provide user friendly way to > combine these together. The whole data construct (per Role) is going to be > sent to tripleo-common workflow as json. Workflow consumes json input and > produces final template yaml. I think we should be able to reuse bunch of > the logic which Ben already created. > > Example: > json input from GUI: > ..., { > type: 'interface', > name: 'nic1', > network_name_lower: 'external' > },... 
> transformed by tripleo-common: > ... > - type: interface > name: nic{{loop.index + 1}} > use_dhcp: false > addresses: > - ip_netmask: > get_param: {{network.name}}IpSubnet > ... > > With this approach, we'll create common API provided by Mistral to > generate NIC config templates which can be reused by CLI and other clients, > not TripleO UI specifically. Note that we will also need a 'reverse' > Mistral workflow which is going to convert template yaml network_config > into the input json format, so GUI can display current configuration to the > user and let him change that. > > Liz has updated network configuration wireframes which can be found here > https://lizsurette.github.io/OpenStack-Design/ > tripleo-ui/3-tripleo-ui-edge-cases/7.advancednetworkconfigurationan > dtopology . The goal is to provide a graphical network configuration > overview and let user perform actions from it. This ensures that with every > action performed, user immediately gets clear feedback on how does the > network configuration look. > > -- Jirka > I like the wireframes overall. However, I'm trying to avoid a flexible and open-ended configuration if it isn't clear what the final configuration should look like. We want to present the user with some basic forms, and let them modify those forms to their needs. > > > >> >> >> -Ben >> > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Dan Sneddon | Senior Principal OpenStack Engineer dsneddon at redhat.com | redhat.com/openstack dsneddon:irc | @dxs:twitter -------------- next part -------------- An HTML attachment was scrubbed... URL: From namnh at vn.fujitsu.com Fri Feb 23 01:30:54 2018 From: namnh at vn.fujitsu.com (namnh at vn.fujitsu.com) Date: Fri, 23 Feb 2018 01:30:54 +0000 Subject: [openstack-dev] [barbican] weekly meeting time In-Reply-To: <1518535079.22990.9.camel@redhat.com> References: <1518535079.22990.9.camel@redhat.com> Message-ID: Hi Ade, The two options are good to me. I choose the second time. Thanks, Nam > -----Original Message----- > From: Ade Lee [mailto:alee at redhat.com] > Sent: Tuesday, February 13, 2018 10:18 PM > To: OpenStack Development Mailing List (not for usage questions) > > Subject: [openstack-dev] [barbican] weekly meeting time > > Hi all, > > The Barbican weekly meeting has been fairly sparsely attended for a little > while now, and the most active contributors these days appear to be in Asia. > > Its time to consider moving the weekly meeting to a time when more > contributors can attend. I'm going to propose a couple times below to start > out. > > 2 am UTC Tuesday == 9 pm EST Monday == 10 am CST (China) Tuesday > 3 am UTC Tuesday == 10 pm EST Monday == 11 am CST (China) Tuesday > > Feel free to propose other days/times. > > Thanks, > Ade > > P.S. 
Until decided otherwise, the Barbican meeting remains on Mondays at > 2000 UTC > > > > > ______________________________________________________________ > ____________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev- > request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From gmann at ghanshyammann.com Fri Feb 23 08:24:17 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 23 Feb 2018 17:24:17 +0900 Subject: [openstack-dev] [all] [QA] [interop] [keystone] [glance] [cinder] [tc] [ptg] QA Sessions for related cross projects Message-ID: Hi All, QA team is planning to discuss few topics in Dublin PTG which need discussion and attention from related projects. If you can plan or ask someone from your team to present your projects, it will be more productive discussions. I am listing 2 such topic: 1. "Interop test for adds-on project": Wed 1.30 -2.00 PM Related projects are interop, heat, designate, TC. This is open discussion topic in last cycle [1] and i do not think we have final consensus for this. Let's discuss it f2f and hopefully close this one with most agreed approach. 2. "Remove Deprecated APIs tests from Tempest." Wed 2.00 - 3.00 PM Related projects are glance, cinder, keystone. This is to remove the testing of Deprecated APIs. We have been discussing this over last couple of cycle but not finished yet. I would like to get related project agreement on removing their deprecated API testing and the best way to do that. I will prepare some sample patches to hava a glance of what is going to be removed and the approach of removal. More details on those sessions can be found on QA PTG etherpad - https://etherpad.openstack.org/p/qa-rocky-ptg ..1 http://lists.openstack.org/pipermail/openstack-dev/2018-January/126146.html Have a safe travel and see you all in Dublin. -gmann From xinni.ge1990 at gmail.com Fri Feb 23 08:29:59 2018 From: xinni.ge1990 at gmail.com (Xinni Ge) Date: Fri, 23 Feb 2018 17:29:59 +0900 Subject: [openstack-dev] [heat] heat-dashboard is non-free, with broken unit test env In-Reply-To: References: <4129015c-b120-786f-60e5-2d6a634f3999@debian.org> Message-ID: Hi there, We are aware of the javascript embedded issue, and working on it now, the patch will be summited later. As for the unittest failure, we are still investigating it. We will contant you as soon as we find out the cause. Sorry to bring troubles to you. We will be grateful if you could wait for a little longer. Best Regards, Xinni On Thu, Feb 22, 2018 at 5:41 AM, Thomas Goirand wrote: > On 02/21/2018 05:54 PM, Corey Bryant wrote: > > > > > > On Wed, Feb 21, 2018 at 9:35 AM, Thomas Goirand > > wrote: > > > > Hi there! > > > > I'm having big trouble package heat-dashboard for Debian. I hope I > can > > get help through this list. > > > > In here: > > > > heat_dashboard/static/dashboard/project/heat_ > dashboard/template_generator/js/ > > > > we have minified *only* versions of Javascript. > > > > > > There's also a bug open for this: > > https://bugs.launchpad.net/heat-dashboard/+bug/1747687 > > > > Regards, > > Corey > > Thanks for the link and filing this bug. 
> > Cheers,
> > Thomas Goirand (zigo)
> >
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

-- 
葛馨霓 Xinni Ge
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From vsaienko at mirantis.com  Fri Feb 23 09:02:29 2018
From: vsaienko at mirantis.com (Vasyl Saienko)
Date: Fri, 23 Feb 2018 11:02:29 +0200
Subject: [openstack-dev] [ironic] Stepping down from Ironic core
Message-ID: 

Hey Ironic community!

Unfortunately I don't work on Ironic as much as I used to any more, so i'm
stepping down from core reviewers.

So, thanks for everything everyone, it's been great to work with you
all for all these years!!!


Sincerely,
Vasyl Saienko
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From dong.wenjuan at zte.com.cn  Fri Feb 23 09:05:55 2018
From: dong.wenjuan at zte.com.cn (dong.wenjuan at zte.com.cn)
Date: Fri, 23 Feb 2018 17:05:55 +0800 (CST)
Subject: [openstack-dev] Re: [tripleo][vitrage] Draft schedule for PTG
In-Reply-To: 
References: CACu=hys1mfdT30WhQdSXXoak=vQj20hRCp5SpEB_AhX+bWdTgA@mail.gmail.com
Message-ID: <201802231705555420751@zte.com.cn>

Hi EmilienMacchi,

I added a topic about 'support Vitrage (Root Cause Analysis project) service'
in the etherpad for PTG.

Can you please help to schedule a time slot? Maybe after the topic of
'Upgrades and Updates', half an hour is enough.

We want to support the feature in the TripleO roadmap for Rocky.

Thanks~

BR,
dwj

Original Mail
From: <emilien@redhat.com>;
To: <openstack-dev@lists.openstack.org>;
Date: 2018-02-20 00:38
Subject: [openstack-dev] [tripleo] Draft schedule for PTG

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Alex and I have been working on the agenda for next week, based on what
people proposed in topics.
The draft calendar is visible here:
https://calendar.google.com/calendar/embed?src=tgpb5tv12mlu7kge5oqertje78%40group.calendar.google.com&ctz=Europe%2FDublin

Also you can import the ICS from:
https://calendar.google.com/calendar/ical/tgpb5tv12mlu7kge5oqertje78%40group.calendar.google.com/public/basic.ics

Note this is a draft - we would love your feedback about the proposal.

Some sessions might be too short or too long? You to tell us. (Please look at
event details for descriptions).
Also, for each session we need a "driver", please tell us if you volunteer to
do it.

Please let us know here and we'll make adjustment, we have plenty of room for
it.

Thanks!
-- 
Emilien Macchi
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jaosorior at gmail.com  Fri Feb 23 10:28:59 2018
From: jaosorior at gmail.com (Juan Antonio Osorio)
Date: Fri, 23 Feb 2018 12:28:59 +0200
Subject: [openstack-dev] [tripleo] Draft schedule for PTG
In-Reply-To: 
References: 
Message-ID: 

Could we change the Security talk to a day before Friday (both Thursday and
Wednesday are fine)? Luke Hinds, the leader of the OpenStack Security group
would like to join that discussion, but is not able to join that day.

On Mon, Feb 19, 2018 at 6:37 PM, Emilien Macchi wrote:

> Alex and I have been working on the agenda for next week, based on what
> people proposed in topics.
>
> The draft calendar is visible here:
> https://calendar.google.com/calendar/embed?src=tgpb5tv12mlu7kge5oqertje78%
> 40group.calendar.google.com&ctz=Europe%2FDublin
>
> Also you can import the ICS from:
> https://calendar.google.com/calendar/ical/tgpb5tv12mlu7kge5oqertje78%
> 40group.calendar.google.com/public/basic.ics
>
> Note this is a draft - we would love your feedback about the proposal.
>
> Some sessions might be too short or too long? You to tell us. (Please look
> at event details for descriptions).
> Also, for each session we need a "driver", please tell us if you volunteer
> to do it.
>
> Please let us know here and we'll make adjustment, we have plenty of room
> for it.
>
> Thanks!
> --
> Emilien Macchi
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>

-- 
Juan Antonio Osorio R.
e-mail: jaosorior at gmail.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From witold.bedyk at est.fujitsu.com  Fri Feb 23 09:57:19 2018
From: witold.bedyk at est.fujitsu.com (Bedyk, Witold)
Date: Fri, 23 Feb 2018 09:57:19 +0000
Subject: [openstack-dev] [ptl][all][python3] collecting current status of
 python 3 support in projects
In-Reply-To: <1519341965-sup-8914@lrrr.local>
References: <1519341965-sup-8914@lrrr.local>
Message-ID: <126fd17e31fb40c089f7edc27b3fadb4@R01UKEXCASM126.r01.fujitsu.local>

Hi Doug,

I have added Monasca client, monasca-statsd library (green).
I have also updated and added other Monasca components still not supporting
Python 3.

Best greetings
Witek


> -----Original Message-----
> From: Doug Hellmann [mailto:doug at doughellmann.com]
> Sent: Freitag, 23. Februar 2018 00:29
> To: openstack-dev
> Subject: [openstack-dev] [ptl][all][python3] collecting current status of
> python 3 support in projects
>
> I am trying to update the wiki document with the current state of support for
> Python 3 projects as part of preparing for a discussion about moving from
> "Python 2 first, then 3" to "Python 3 first, then 2" development.
>
> I have added the missing libraries and services (at least those managed by
> the release team) and done my best to figure out if there are unit and
> functional/integration test jobs for each project.
>
> I need your help to verify the information I have collected and fill in any gaps.
>
> Please look through the tables in [1] and if your projects' status is out of date
> either update the page directly or email me (off
> list) with the updates.
>
> Thanks!
> Doug > > [1] > https://wiki.openstack.org/wiki/Python3#Python_3_Status_of_OpenStack_ > projects > > __________________________________________________________ > ________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev- > request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From arxcruz at redhat.com Fri Feb 23 11:49:22 2018 From: arxcruz at redhat.com (Arx Cruz) Date: Fri, 23 Feb 2018 12:49:22 +0100 Subject: [openstack-dev] [tripleo] rdo cloud tenant reach 100 stacks Message-ID: Hello, We just notice that there are several jobs failing because the openstack-nodepool tenant reach 100 stacks and cannot create new ones. I notice there are several stacks created > 10 hours, and I'm manually deleting those ones. I don't think it will affect someone, but just in case, be aware of it. Kind regards, Arx Cruz -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Fri Feb 23 11:51:20 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Fri, 23 Feb 2018 12:51:20 +0100 Subject: [openstack-dev] [ironic] Stepping down from Ironic core In-Reply-To: References: Message-ID: Hi Vasyl, I'm sad to hear it :( Thank YOU for everything! The only thing in the world more valuable than your contributions to ironic is the joy of hanging out with you at the events :) Good luck and do not disappear. Dmitry On 02/23/2018 10:02 AM, Vasyl Saienko wrote: > Hey Ironic community! > > Unfortunately I don't work on Ironic as much as I used to any more, so i'm > stepping down from core reviewers. > > So, thanks for everything everyone, it's been great to work with you > all for all these years!!! > > > Sincerely, > Vasyl Saienko > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From andrea.frittoli at gmail.com Fri Feb 23 12:01:28 2018 From: andrea.frittoli at gmail.com (Andrea Frittoli) Date: Fri, 23 Feb 2018 12:01:28 +0000 Subject: [openstack-dev] [gate][devstack][neutron][qa][release] Switch to lib/neutron in gate In-Reply-To: References: Message-ID: On Fri, Jan 19, 2018 at 8:36 PM Ihar Hrachyshka wrote: > Hi Andrea, > > thanks for taking time to reply. I left some answers inline. > > On Fri, Jan 19, 2018 at 9:08 AM, Andrea Frittoli > wrote: > > > > > > On Wed, Jan 17, 2018 at 7:27 PM Ihar Hrachyshka > wrote: > >> > >> Hi all, > > > > > > Hi! > >> > >> > >> tl;dr I propose to switch to lib/neutron devstack library in Queens. I > >> ask for buy-in to the plan from release and QA teams, something that > >> infra asked me to do. > Hello Ihar, I'm coming back to this thread after a while, since I'm writing zuulv3 native devstack jobs, and I think this is the perfect opportunity for starting to using lib/neutron, since we easily decide to limit the change to the branches we want, e.g. start with master and then go back to stable/queens if we want to. Zuulv3 native jobs run on master, queens and they're being ported to Pike. Starting from Queens on I plan to stop using the test-matrix from devstack-gate, and define the list of required services in jobs instead. This is made possible by the job inheritance capability introduced by Zuul v3. 
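As a rough sketch of what I mean (the job and variable names below are purely
illustrative, the real definitions are in the reviews referenced in this mail),
a child job would just toggle the services it cares about and inherit
everything else from the devstack base job:

    - job:
        name: devstack-neutron-example   # illustrative name only
        parent: devstack
        vars:
          devstack_services:
            # enable the new lib/neutron service names...
            neutron-api: true
            neutron-l3: true
            neutron-dhcp: true
            # ...and keep the legacy q-* names disabled
            q-svc: false
            q-l3: false
            q-dhcp: false

Anything not listed is inherited from the parent job, which is what allows us
to drop the devstack-gate test matrix for these jobs.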
For the single node job I proposed a change already [0], and I'm working on the same for multinode. I would like if possible to start using new service and variable names as part of [0] but I need some help on that. This change in the zuulv3 jobs should replace the existing in progress devstack-gate patch [1]. If you are at the PTG I would be happy to chat / hack on this topic there. Andrea Frittoli (andreaf) [0] https://review.openstack.org/#/c/545633/ [1] https://review.openstack.org/#/c/436798 > >> > >> === > >> > >> Last several cycles we were working on getting lib/neutron - the new > >> in-tree devstack library to deploy neutron services - ready to deploy > >> configurations we may need in our gates. > > > > > > May I ask the reason for hosting this in the neutron tree? > > Sorry for wording it in a misleading way. The lib/neutron library is > in *devstack* tree: > https://git.openstack.org/cgit/openstack-dev/devstack/tree/lib/neutron > So in terms of deployment dependencies, there are no new repositories > to fetch from or gate against. > > > > >> > >> Some pieces of the work > >> involved can be found in: > >> > >> https://review.openstack.org/#/q/topic:new-neutron-devstack-in-gate > >> > >> I am happy to announce that the work finally got to the point where we > >> can consistently pass both devstack-gate and neutron gates: > >> > >> (devstack-gate) https://review.openstack.org/436798 > > > > > > Both legacy and new style (zuulv3) jobs rely on the same test matrix > code, > > so your change would impact both worlds consistently, which is good. > > > >> > >> > >> (neutron) https://review.openstack.org/441579 > >> > >> One major difference between the old lib/neutron-legacy library and > >> the new lib/neutron one is that service names for neutron are > >> different. For example, q-svc is now neutron-api, q-dhcp is now > >> neutron-dhcp, etc. (In case you wonder, this q- prefix links us back > >> to times when Neutron was called Quantum.) The way lib/neutron is > >> designed is that whenever a single q-* service name is present in > >> ENABLED_SERVICES, the old lib/neutron-legacy code is triggered to > >> deploy services. > >> > >> Service name changes are a large part of the work. The way the > >> devstack-gate change linked above is designed is that it changes names > >> for deployed neutron services starting from Queens (current master), > >> so old branches and grenade jobs are not affected by the change. > > > > > > Any other change worth mentioning? > > > > The new library is a lot more simplified and opinionated and has fewer > knobs and branching that is not very useful for majority of users. > lib/neutron-legacy was always known for its complicated configuration. > We hope that adopting the new library will unify and simplify neutron > configuration across different jobs and setups. > > From consumer perspective, nothing should change expect service names. > Some localrc files may need adoption if they rely on old arcane knobs. > It can be done during transition phase since old service names are > expected to work. > > >> > >> > >> While we validated the change switching to new names against both > >> devstack-gate and neutron gates that should cover 90% of our neutron > >> configurations, and followed up with several projects that - we > >> induced - may be affected by the change - there is always a chance > >> that some job in some project gate would fail because of it, and we > >> would need to push a (probably rather simple) follow-up to unbreak the > >> affected job. 
Due to the nature of the work, the span of impact, and > >> the fact that infra repos are not easily gated against with Depends-On > >> links, we may need to live with the risk. > >> > >> Of course, there are several aspects of the project life involved, > >> including QA and release delivery efforts. I was advised to reach out > >> to both of those teams to get a buy-in to proceed with the move. If we > >> have support for the switch now, as per Clark, infra is ready to > >> support the switch. > >> > >> Note that the effort span several cycles, partially due to low review > >> velocity in several affected repos (devstack, devstack-gate), > >> partially because new changes in all affected repos were pulling us > >> back from the end goal. This is one of the reasons why I would like us > >> to do the switch sooner rather than later, since chasing this moving > >> goalpost became rather burdensome. > >> > >> What are QA and release team thoughts on the switch? Are we ready to > >> do it in next weeks? > > > > > > If understood properly it would still be possible to use the old names > > right? > > Some jobs may not rely on test matrix and just hard code the list of > > services. > > Such jobs would be broken otherwise. > > > > What's the planned way forward towards removing the legacy lib? > > Yes, they should still work. > > My plan is to complete switch of devstack-gate to new names; once we > are sure all works as expected, we can proceed with replacing all q-* > service names still captured by codesearch.openstack.org with new > names; finally, remove lib/neutron-legacy in Rocky. (Note that the old > library already issues a deprecation warning since Newton: > https://review.openstack.org/#/c/315806/) > > Ihar > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From arxcruz at redhat.com Fri Feb 23 13:20:34 2018 From: arxcruz at redhat.com (Arx Cruz) Date: Fri, 23 Feb 2018 14:20:34 +0100 Subject: [openstack-dev] [tripleo] rdo cloud tenant reach 100 stacks In-Reply-To: References: Message-ID: Just an update, we cleaned up the stacks with more than 10 hours, jobs should be working properly now. Kind regards, Arx Cruz On Fri, Feb 23, 2018 at 12:49 PM, Arx Cruz wrote: > Hello, > > We just notice that there are several jobs failing because the > openstack-nodepool tenant reach 100 stacks and cannot create new ones. > > I notice there are several stacks created > 10 hours, and I'm manually > deleting those ones. > I don't think it will affect someone, but just in case, be aware of it. > > Kind regards, > Arx Cruz > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aj at suse.com Fri Feb 23 13:34:28 2018 From: aj at suse.com (Andreas Jaeger) Date: Fri, 23 Feb 2018 14:34:28 +0100 Subject: [openstack-dev] Retirement of astara repos? In-Reply-To: <0DE3CB09-5CA1-4557-9158-C40F0FC37E6E@mcclain.xyz> References: <572FF9CF-9AB5-4CBA-A4C8-26E7A012309E@gmx.com> <0DE3CB09-5CA1-4557-9158-C40F0FC37E6E@mcclain.xyz> Message-ID: On 2018-01-11 22:55, Mark McClain wrote: > Sean, Andreas- > > Sorry I missed Andres’ message earlier in December about retiring astara. Everyone is correct that development stopped a good while ago. 
We attempted in Barcelona to find others in the community to take over the day-to-day management of the project. Unfortunately, nothing sustained resulted from that session. > > I’ve intentionally delayed archiving the repos because of background conversations around restarting active development for some pieces bubble up from time-to-time. I’ll contact those I know were interested and try for a resolution to propose before the PTG. Mark, any update here? Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From no-reply at openstack.org Fri Feb 23 14:19:47 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Fri, 23 Feb 2018 14:19:47 -0000 Subject: [openstack-dev] [heat] heat 10.0.0.0rc2 (queens) Message-ID: Hello everyone, A new release candidate for heat for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/heat/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/heat/log/?h=stable/queens Release notes for heat can be found at: http://docs.openstack.org/releasenotes/heat/ From derekh at redhat.com Fri Feb 23 14:48:59 2018 From: derekh at redhat.com (Derek Higgins) Date: Fri, 23 Feb 2018 14:48:59 +0000 Subject: [openstack-dev] [tripleo]Testing ironic in the overcloud In-Reply-To: References: Message-ID: On 1 February 2018 at 16:18, Emilien Macchi wrote: > On Thu, Feb 1, 2018 at 8:05 AM, Derek Higgins wrote: > [...] > >> o Should I create a new tempest test for baremetal as some of the >>>> networking stuff is different? >>>> >>> >>> I think we would need to run baremetal tests for this new featureset, >>> see existing files for examples. >>> >> Do you mean that we should use existing tests somewhere or create new >> ones? >> > > I mean we should use existing tempest tests from ironic, etc. Maybe just a > baremetal scenario that spawn a baremetal server and test ssh into it, like > we already have with other jobs. > Done, the current set of patches sets up a new non voting job "tripleo-ci-centos-7-scenario011-multinode-oooq-container" which setup up ironic in the overcloud and run the ironic tempest job "ironic_tempest_plugin.tests.scenario.test_baremetal_basic_ops.BaremetalBasicOps.test_baremetal_server_ops" its currently passing so I'd appreciate a few eyes on it before it becomes out of date again there are 4 patches starting here https://review.openstack.org/#/c/509728/19 > > o Is running a script on the controller with NodeExtraConfigPost the best >>>> way to set this up or should I be doing something with quickstart? I don't >>>> think quickstart currently runs things on the controler does it? >>>> >>> >>> What kind of thing do you want to run exactly? >>> >> The contents to this file will give you an idea, somewhere I need to >> setup a node that ironic will control with ipmi >> https://review.openstack.org/#/c/485261/19/ci/common/vbmc_setup.yaml >> > > extraconfig works for me in that case, I guess. Since we don't productize > this code and it's for CI only, it can live here imho. 
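For anyone following along, the general shape of such a NodeExtraConfigPost
hook is roughly the following (a minimal sketch only, not the actual contents
of the vbmc_setup.yaml linked above):

    heat_template_version: ocata
    parameters:
      servers:
        type: json
    resources:
      ExtraConfig:
        type: OS::Heat::SoftwareConfig
        properties:
          group: script
          config: |
            #!/bin/bash
            # install/configure the virtual BMC that ironic will drive over ipmi
            # (illustrative placeholder, see the linked review for the real script)
      ExtraDeployments:
        type: OS::Heat::SoftwareDeploymentGroup
        properties:
          servers: {get_param: servers}
          config: {get_resource: ExtraConfig}
          actions: ['CREATE']

It then gets wired in via the resource_registry as
OS::TripleO::NodeExtraConfigPost from an extra environment file.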
> > Thanks, > -- > Emilien Macchi > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Fri Feb 23 15:33:35 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Fri, 23 Feb 2018 09:33:35 -0600 Subject: [openstack-dev] [all] [QA] [interop] [keystone] [glance] [cinder] [tc] [ptg] QA Sessions for related cross projects In-Reply-To: References: Message-ID: On Fri, Feb 23, 2018 at 2:24 AM, Ghanshyam Mann wrote: > Hi All, > > QA team is planning to discuss few topics in Dublin PTG which need > discussion and attention from related projects. If you can plan or ask > someone from your team to present your projects, it will be more > productive discussions. > > I am listing 2 such topic: > > 1. "Interop test for adds-on project": Wed 1.30 -2.00 PM > Related projects are interop, heat, designate, TC. This is open > discussion topic in last cycle [1] and i do not think we have final > consensus for this. Let's discuss it f2f and hopefully close this one > with most agreed approach. > > 2. "Remove Deprecated APIs tests from Tempest." Wed 2.00 - 3.00 PM > Related projects are glance, cinder, keystone. This is to remove the > testing of Deprecated APIs. We have been discussing this over last > couple of cycle but not finished yet. I would like to get related > project agreement on removing their deprecated API testing and the > best way to do that. I will prepare some sample patches to hava a > glance of what is going to be removed and the approach of removal. > ++ We have a couple open slots on Wednesday and Thursdays - so I moved our libraries session to give the keystone team more flexibility to attend the QA session [0]. [0] https://etherpad.openstack.org/p/keystone-rocky-ptg > > More details on those sessions can be found on QA PTG etherpad - > https://etherpad.openstack.org/p/qa-rocky-ptg > > ..1 http://lists.openstack.org/pipermail/openstack-dev/2018- > January/126146.html > > Have a safe travel and see you all in Dublin. > > -gmann > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim at jimrollenhagen.com Fri Feb 23 16:10:59 2018 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Fri, 23 Feb 2018 11:10:59 -0500 Subject: [openstack-dev] [ironic] Stepping down from Ironic core In-Reply-To: References: Message-ID: Thank you for all of your good work, Vasyl. Sorry to see you go, but hopefully you'll still be around IRC and such :) // jim On Fri, Feb 23, 2018 at 4:02 AM, Vasyl Saienko wrote: > Hey Ironic community! > > Unfortunately I don't work on Ironic as much as I used to any more, so i'm > stepping down from core reviewers. > > So, thanks for everything everyone, it's been great to work with you > all for all these years!!! 
> > > Sincerely, > Vasyl Saienko > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pabelanger at redhat.com Fri Feb 23 16:22:33 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Fri, 23 Feb 2018 11:22:33 -0500 Subject: [openstack-dev] [tripleo] rdo cloud tenant reach 100 stacks In-Reply-To: References: Message-ID: <20180223162233.GA9979@localhost.localdomain> On Fri, Feb 23, 2018 at 02:20:34PM +0100, Arx Cruz wrote: > Just an update, we cleaned up the stacks with more than 10 hours, jobs > should be working properly now. > > Kind regards, > Arx Cruz > > On Fri, Feb 23, 2018 at 12:49 PM, Arx Cruz wrote: > > > Hello, > > > > We just notice that there are several jobs failing because the > > openstack-nodepool tenant reach 100 stacks and cannot create new ones. > > > > I notice there are several stacks created > 10 hours, and I'm manually > > deleting those ones. > > I don't think it will affect someone, but just in case, be aware of it. > > > > Kind regards, > > Arx Cruz > > Give that multinode jobs are first class citizen in zuulv3, I'd like to take some time at the PTG to discuss what would be needed to stop using heat for OVB and switch to nodepool. There are a number of reasons to do this, remove te-broker, remove heat dependency for testing, use common tooling, etc. I believe there is a CI session for tripelo one day, I was thinking of bringing it up then. Unless there is a better time. Paul From mike at openstack.org Fri Feb 23 16:53:23 2018 From: mike at openstack.org (Mike Perez) Date: Fri, 23 Feb 2018 08:53:23 -0800 Subject: [openstack-dev] Developer Mailing List Digest February 17-23rd Message-ID: <20180223165323.GC32596@openstack.org> HTML version: https://www.openstack.org/blog/?p=8332 Contribute to the Dev Digest by summarizing OpenStack Dev List threads: * https://etherpad.openstack.org/p/devdigest * http://lists.openstack.org/pipermail/openstack-dev/ * http://lists.openstack.org/pipermail/openstack-sigs Helpful PTG links ================== PTG is around the corner. Here are some helpful links: * Main welcome email http://lists.openstack.org/pipermail/openstack-dev/2018-February/127611.html * Quick links: http://ptg.openstack.org/ * PDF schedule: http://lists.openstack.org/pipermail/openstack-dev/attachments/20180221/5c279bb3/attachment-0002.pdf * PDf map for PTG venue: http://lists.openstack.org/pipermail/openstack-dev/attachments/20180221/5c279bb3/attachment-0003.pdf Success Bot Says ================ * mhayden: got centos OSA gate under 2h today * thingee: we have an on-boarding page and documentation for new contributors! [0] * Tell us yours in OpenStack IRC channels using the command "#success " * More: https://wiki.openstack.org/wiki/Successes [0] - https://www.openstack.org/community Thanks Bot Says =============== * Thanks pkovar for keep the Documentation team going! * Thanks pabelanger and infra for getting ubuntu mirrors repaired and backup quickly! * Thanks lbragstad for helping troubleshoot an intermittent fernet token validation failure in puppet gates * Thanks TheJulia for helping me with a problem last week, it was really a networking problem issue, like you said so :) * Thanks tosky for backporting devstack ansible changes to pike! 
* Thanks thingee for Thanks Bot * Thanks openstackstatus for logging our things * Thanks strigazi for the v1.9.3 image * Thanks smcginnis for not stopping this. * Tell us yours in OpenStack IRC channels using the command "#thanks " * More: https://wiki.openstack.org/wiki/Thanks Community Summaries =================== * TC report [0] * POST /api-sig/news [1] * Release countdown [2] [0] - http://lists.openstack.org/pipermail/openstack-dev/2018-February/127584.html [1] - http://lists.openstack.org/pipermail/openstack-dev/2018-February/127651.html [2] - http://lists.openstack.org/pipermail/openstack-dev/2018-February/127465.html Vancouver Community Contributor Awards ====================================== The Community contributor awards gives recognition to those that are undervalued, don't know they're appreciated, bind the community together, keep things fun, or challenge some norm. There are a lot of people out there that could use a pat on the back and affirmation that they do good work in the community. Nomination period is open now [0] until May 14th. Winners will be announced in feedback session at Vancouver. [0] - https://openstackfoundation.formstack.com/forms/cca_nominations_vancouver Full message: http://lists.openstack.org/pipermail/openstack-dev/2018-February/127634.html Release Naming For S - time to suggest a name! ============================================== It's time to pick a name for our "S" release! Since the associated Summit will be in Berlin, the Geographic location has been chosen as "Berlin" (state). Nominations are now open [0]. Rules and processes can be seen on the Governance site [1]. [0] - https://wiki.openstack.org/wiki/Release_Naming/S_Proposals [1] - https://governance.openstack.org/tc/reference/release-naming.html Full message: http://lists.openstack.org/pipermail/openstack-dev/2018-February/127592.html Final Queens RC Deadline ======================== Thursday 22nd of April is the deadline for any final Queens release candidates. We'll enter a quiet period for a week in preparation of tagging the final Queens release during the PTG week. Make sure if you have patches merged to stable/queens that you propose a new RC before the deadline. PTLs should watch for a patch from the release management team tagging the final release. While not required, an acknowledgement on the patch would be appreciated. Full message: http://lists.openstack.org/pipermail/openstack-dev/2018-February/127540.html Do Not Import oslo_db.tests.* ============================= Deprecations were made on oslo_db.sqlalchemy.test_base package of DbFixture and DbTestCase. In a patch [0], and assumption was made to that these should be imported from oslo_db.tests.sqlalchemy. Cinder, Ironic and Glance have been found with this issue [1]. Unfortunately these were not prefixed with underscores to comply with naming conventions for people to recognize private code. The tests module was included for consumers to run those tests on their own packages easily. [0] - https://review.openstack.org/#/c/522290/ [1] - http://codesearch.openstack.org/?q=oslo_db.tests&i=nope&files=&repos= Full thread: http://lists.openstack.org/pipermail/openstack-dev/2018-February/thread.html#127531 Some New Zuul Features ====================== Default timeout is 30 minutes for "post-run" phase of the job. A new attribute "timeout" [0] can set this to something else, which could be useful for a job that performs a long artifact upload. 
Two new job attributes added "host-vars" and "group-vars" [1] which behave like "vars" but applies to a specific host or group. [0] - https://docs.openstack.org/infra/zuul/user/config.html#attr-job.post-timeout [1] - https://docs.openstack.org/infra/zuul/user/config.html#attr-job.host-vars Full message: http://lists.openstack.org/pipermail/openstack-dev/2018-February/127591.html -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From whayutin at redhat.com Fri Feb 23 17:09:29 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Fri, 23 Feb 2018 12:09:29 -0500 Subject: [openstack-dev] [ openstack-dev ][ tripleo ] unplanned outtage in RDO-Cloud Message-ID: Greetings, *The TripleO CI in RDO-Cloud has experienced an unplanned outage and is down at this time. We will update this thread with more information regarding when the CI will be brought back online as it becomes available.* Thank you! Wes Hayutin -------------- next part -------------- An HTML attachment was scrubbed... URL: From lsurette at redhat.com Fri Feb 23 17:55:07 2018 From: lsurette at redhat.com (Liz Blanchard) Date: Fri, 23 Feb 2018 12:55:07 -0500 Subject: [openstack-dev] [TripleO][ui] Network Configuration wizard In-Reply-To: References: Message-ID: Hi All, I've made some additional updates to my wireframes[1] and I think they are in a good spot now for a discussion/review at the PTG next week. Please feel free to reach out with any questions or feedback! Thanks, Liz [1] https://lizsurette.github.io/OpenStack-Design/tripleo-ui/3-tripleo-ui-edge-cases/7.advancednetworkconfigurationandtopology On Thu, Feb 22, 2018 at 7:55 PM, Dan Sneddon wrote: > > > On Thu, Feb 15, 2018 at 2:00 AM, Jiri Tomasek wrote: > >> >> On Wed, Feb 14, 2018 at 11:16 PM, Ben Nemec >> wrote: >> >>> >>> >>> On 02/09/2018 08:49 AM, Jiri Tomasek wrote: >>> >>>> *Step 2. network-environment -> NIC configs* >>>> >>>> Second step of network configuration is NIC config. For this >>>> network-environment.yaml is used which references NIC config templates >>>> which define network_config in their resources section. User is currently >>>> required to configure these templates manually. We would like to provide >>>> interactive view which would allow user to setup these templates using >>>> TripleO UI. A good example is a standalone tool created by Ben Nemec [3]. >>>> >>>> There is currently work aimed for Pike to introduce jinja templating >>>> for network environments and templates [4] (single-nic-with-vlans, >>>> bond-with-vlans) to support composable networks and roles (integrate data >>>> from roles_data.yaml and network_data.yaml) It would be great if we could >>>> move this one step further by using these samples as a starting point and >>>> let user specify full NIC configuration. >>>> >>>> Available information at this point: >>>> - list of roles and networks as well as which networks need to be >>>> configured at which role's NIC Config template >>>> - os-net-config schema which defines NIC configuration elements and >>>> relationships [5] >>>> - jinja templated sample NIC templates >>>> >>>> Requirements: >>>> - provide feedback to the user about networks assigned to role and have >>>> not been configured in NIC config yet >>>> >>> >>> I don't have much to add on this point, but I will note that because my >>> UI is standalone and pre-dates composable networks it takes the opposite >>> approach. 
As a user adds a network to a role, it exposes the configuration >>> for that network. Since you have the networks ahead of time, you can >>> obviously expose all of those settings up front and ensure the correct >>> networks are configured for each nic-config. >>> >>> I say this mostly for everyone's awareness so design elements of my tool >>> don't get copied where they don't make sense. >>> >>> - let user construct network_config section of NIC config templates for >>>> each role (brigdes/bonds/vlans/interfaces...) >>>> - provide means to assign network to vlans/interfaces and automatically >>>> construct network_config section parameter references >>>> >>> >>> So obviously your UI code is going to differ, but I will point out that >>> the code in my tool for generating the actual os-net-config data is >>> semi-standalone: https://github.com/cybertron/t >>> ripleo-scripts/blob/master/net_processing.py >>> >>> It's also about 600 lines of code and doesn't even handle custom roles >>> or networks yet. I'm not clear whether it ever will at this point given >>> the change in my focus. >>> >>> Unfortunately the input JSON schema isn't formally documented, although >>> the unit tests do include a number of examples. >>> https://github.com/cybertron/tripleo-scripts/blob/master/tes >>> t-data/all-the-things/nic-input.json covers quite a few different cases. >>> >>> - populate parameter definitions in NIC config templates based on >>>> role/networks assignment >>>> - populate parameter definitions in NIC config templates based on >>>> specific elements which use them e.g. BondInterfaceOvsOptions in case when >>>> ovs_bond is used >>>> >>> >>> I guess there's two ways to handle this - you could use the new jinja >>> templating to generate parameters, or you could handle it in the generation >>> code. >>> >>> I'm not sure if there's a chicken-and-egg problem with the UI generating >>> jinja templates, but that's probably the simplest option if it works. The >>> approach I took with my tool was to just throw all the parameters into all >>> the files and if they're unused then oh well. With jinja templating you >>> could do the same thing - just copy a single boilerplate parameter header >>> that includes the jinja from the example nic-configs and let the templating >>> handle all the logic for you. >>> >>> It would be cleaner to generate static templates that don't need to be >>> templated, but it would require re-implementing all of the custom network >>> logic for the UI. I'm not sure being cleaner is sufficient justification >>> for doing that. >>> >>> - store NIC config templates in deployment plan and reference them from >>>> network-environment.yaml >>>> >>>> Problems to solve: >>>> As a biggest problem to solve I see defining logic which would >>>> automatically handle assigning parameters to elements in network_config >>>> based on Network which user assigns to the element. For example: Using GUI, >>>> user is creating network_config for compute role based on >>>> network/config/multiple-nics/compute.yaml, user adds an interface and >>>> assigns the interface to Tenant network. Resulting template should then >>>> automatically populate addresses/ip_netmask: get_param: TenantIpSubnet. >>>> Question is whether all this logic should live in GUI or should GUI pass >>>> simplified format to Mistral workflow which will convert it to proper >>>> network_config format and populates the template with it. 
>>>> >>> >>> I guess the fact that I separated the UI and config generation code in >>> my tool is my answer to this question. I don't remember all of my reasons >>> for that design, but I think the main thing was to keep the input and >>> generation cleanly separated. Otherwise there was a danger of making a UI >>> change and having it break the generation process because they were tightly >>> coupled. Having a JSON interface between the two avoids a lot of those >>> problems. It also made it fairly easy to unit test the generation code, >>> whereas trying to mock out all of the UI elements would have been a fragile >>> nightmare. >>> >>> It does require a bunch of translation code[1], but a lot of it is >>> fairly boilerplate (just map UI inputs to JSON keys). >>> >>> 1: https://github.com/cybertron/tripleo-scripts/blob/171aedabfe >>> ad1f27f4dc0fce41a8b82da28923ed/net-iso-gen.py#L515 >>> >>> Hope this helps. >> >> >> Ben, thanks a lot for your input. I think this makes the direction with >> NIC configs clearer: >> >> 1. The generated template will include all possible parameters >> definitions unless we find a suitable way of populating parameters section >> part of template generation process. Note that current jinja templates for >> NIC config (e.g. network/config/multiple-nics/role.role.j2.yaml:127) >> create these definitions conditionally by specific role name which is not >> very elegant in terms of custom roles. >> > > This patch recently landed, which generates all the needed parameters in > the sample NIC configs based on the composable networks defined in > network_data.yaml: > https://review.openstack.org/#/c/523638 > > Furthermore, this patch removes all the role-specific hard-coded > templates, and generates templates based on the role-to-network association > in roles_data.yaml. > > I think we could use this method to generate the needed parameters for the > templates generated in the UI. I would personally like to see a workflow > where the user chose one of the built-in NIC config designs to generate > samples, which could then be further edited. Presenting a blank slate to > the user, and requiring them to build up the hierarchy is very confusing > unless the installer is very familiar with the desired architecture (first > add a bridge, then add a bond to the bridge, then add interfaces to the > bond, then add VLANs to the bridge). It's better to start with a basic > example (VLANs on a single NIC, one NIC per network, DPDK, etc.), and allow > the user to customize from there. > > >> >> 2. GUI is going to define forms to add/configure network elements >> (interface, bridge, bond, vlan, ...) and provide user friendly way to >> combine these together. The whole data construct (per Role) is going to be >> sent to tripleo-common workflow as json. Workflow consumes json input and >> produces final template yaml. I think we should be able to reuse bunch of >> the logic which Ben already created. >> >> Example: >> json input from GUI: >> ..., { >> type: 'interface', >> name: 'nic1', >> network_name_lower: 'external' >> },... >> transformed by tripleo-common: >> ... >> - type: interface >> name: nic{{loop.index + 1}} >> use_dhcp: false >> addresses: >> - ip_netmask: >> get_param: {{network.name}}IpSubnet >> ... >> >> With this approach, we'll create common API provided by Mistral to >> generate NIC config templates which can be reused by CLI and other clients, >> not TripleO UI specifically. 
Note that we will also need a 'reverse' >> Mistral workflow which is going to convert template yaml network_config >> into the input json format, so GUI can display current configuration to the >> user and let him change that. >> >> Liz has updated network configuration wireframes which can be found here >> https://lizsurette.github.io/OpenStack-Design/tripleo- >> ui/3-tripleo-ui-edge-cases/7.advancednetworkconfigurationandtopology . >> The goal is to provide a graphical network configuration overview and let >> user perform actions from it. This ensures that with every action >> performed, user immediately gets clear feedback on how does the network >> configuration look. >> >> -- Jirka >> > > I like the wireframes overall. However, I'm trying to avoid a flexible and > open-ended configuration if it isn't clear what the final configuration > should look like. We want to present the user with some basic forms, and > let them modify those forms to their needs. > > >> >> >> >>> >>> >>> -Ben >>> >> >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > > -- > Dan Sneddon | Senior Principal OpenStack Engineer > dsneddon at redhat.com | redhat.com/openstack > dsneddon:irc | @dxs:twitter > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Fri Feb 23 19:07:19 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 23 Feb 2018 13:07:19 -0600 Subject: [openstack-dev] [all] Please abandon new stable/newton patches Message-ID: <20180223190718.GA29577@sm-xps> It appears some cleanup work being done in the openstack/releases repo incorrectly recreated stable/newton branches for many projects. Along with the branch creation are the three patches to update reno in master and update .gitreview and constraints in the stable branch. This was not intentional, and none of these patches should be merged. If you see them for any of your repos, please just abandon those patches. We will work on cleaning up the old branches ASAP. Thanks for your help. Sean From ramamani.yeleswarapu at intel.com Fri Feb 23 19:46:45 2018 From: ramamani.yeleswarapu at intel.com (Yeleswarapu, Ramamani) Date: Fri, 23 Feb 2018 19:46:45 +0000 Subject: [openstack-dev] [ironic] this week's priorities and subteam reports Message-ID: Hi, We are glad to present this week's priorities and subteam report for Ironic. As usual, this is pulled directly from the Ironic whiteboard[0] and formatted. This Week's Priorities (as of the weekly ironic meeting) ======================================================== Weekly priorities ----------------- - Fix the multitenant grenade - https://bugs.launchpad.net/ironic/+bug/1744139 - Testing another possibility - Disable .pyc file creation https://review.openstack.org/544750 MERGED - Avoids library incompatibility issue by disabling .pyc files from being written to disk in the scenario. 
- backport to stable/queens: https://review.openstack.org/#/c/545089/ MERGED - The nova issue noted under critical bugs is also needed to make multitenant grenade reliable again. - CI and docs work for classic drivers deprecation (see status below) - Required Backports/Nice to haves below - CRITICAL bugs (must be fixed and backported to queens before the release) - ironic port list fix: https://review.openstack.org/545069 MERGED - backport: https://review.openstack.org/#/c/545892/MERGED - Nova - Placement has issues after upgrade if ironic is unreachable for too long - Current WIP: https://review.openstack.org/#/c/545479/ - https://bugs.launchpad.net/nova/+bug/1750450 - Prepare for the PTG - https://etherpad.openstack.org/p/ironic-rocky-ptg Required Queens Backports ------------------------- - Fix for incorrect query during timeout check: https://review.openstack.org/545041 MERGED - queens: https://review.openstack.org/#/c/545806/ MERGED - Problem with traits and cleaning: https://bugs.launchpad.net/ironic/+bug/1750027 - https://review.openstack.org/#/c/545830/ MERGED - backport to stable/queens: https://review.openstack.org/#/c/546830/ MERGED - Problem with unrescue and netboot: https://review.openstack.org/#/c/544278/ MERGED - https://review.openstack.org/#/c/546026/ MERGED - rescue and UEFI: https://review.openstack.org/#/c/545186/ MERGED - backport: https://review.openstack.org/#/c/546955/ MERGED - configdrive overflow: https://review.openstack.org/#/c/334967/ MERGED - backport: https://review.openstack.org/#/c/546551/ MERGED - detached VIF reappearing: https://bugs.launchpad.net/ironic/+bug/1750785 - workaround: https://review.openstack.org/546584 abandoned - decided to revert the original patch: https://review.openstack.org/546705 MERGED - backport to stable/queens: https://review.openstack.org/546719 APPROVED Nice to have backports ---------------------- - Ansible docs - https://review.openstack.org/#/c/525501/ MERGED - backport https://review.openstack.org/#/c/546079/ MERGED - inspector: do not try passing non-MACs as switch_id: https://review.openstack.org/542214 APPROVED - stable/queens - https://review.openstack.org/543961 MERGED - Fix for CLEANING on conductor restart: https://review.openstack.org/349971 MERGED - backport: https://review.openstack.org/#/c/545893/ MERGED - Reset reservations on take over: https://review.openstack.org/546273 Vendor priorities ----------------- cisco-ucs: Patches in works for SDK update, but not posted yet, currently rebuilding third party CI infra after a disaster... idrac: RFE and first several patches for adding UEFI support will be posted by Tuesday, 1/9 ilo: https://review.openstack.org/#/c/530838/ - OOB Raid spec for iLO5 irmc: None oneview: Subproject priorities --------------------- bifrost: ironic-inspector (or its client): networking-baremetal: networking-generic-switch: sushy and the redfish driver: Bugs (dtantsur, vdrok, TheJulia) -------------------------------- - Stats (diff between 12 Feb 2018 and 19 Feb 2018) - Ironic: 208 bugs (-1) + 243 wishlist items (-4). 3 new (+1), 155 in progress (-2), 1 critical, 33 high (+4) and 23 incomplete (+3) - Inspector: 13 bugs (-4) + 26 wishlist items (+1). 0 new, 12 in progress (-2), 0 critical (-2), 2 high (-1) and 4 incomplete - Nova bugs with Ironic tag: 16 (+2). 
2 new (+1), 0 critical, 0 high - via http://dashboard-ironic.7e14.starter-us-west-2.openshiftapps.com/ - the dashboard was abruptly deleted and needs a new home :( - use it locally with `tox -erun` if you need to - HIGH bugs with patches to review: - Clean steps are not tested in gate https://bugs.launchpad.net/ironic/+bug/1523640: Add manual clean step ironic standalone test https://review.openstack.org/#/c/429770/15 - Needs to be reproposed to the ironic tempest plugin repository. - prepare_instance() is not called for whole disk images with 'agent' deploy interface https://bugs.launchpad.net/ironic/+bug/1713916: - Fix ``agent`` deploy interface to call ``boot.prepare_instance`` https://review.openstack.org/#/c/499050/ - (TheJulia) Currently WF-1, as revision is required for deprecation. CI refactoring and missing test coverage ---------------------------------------- - not considered a priority, it's a 'do it always' thing - Standalone CI tests (vsaienk0) - next patch to be reviewed, needed for 3rd party CI: https://review.openstack.org/#/c/429770/ - localboot with partitioned image patches: - Ironic - add localboot partitioned image test: https://review.openstack.org/#/c/502886/ - when previous are merged TODO (vsaienko) - Upload tinycore partitioned image to tarbals.openstack.org - Switch ironic to use tinyipa partitioned image by default - Missing test coverage (all) - portgroups and attach/detach tempest tests: https://review.openstack.org/382476 - adoption: https://review.openstack.org/#/c/344975/ - should probably be changed to use standalone tests - root device hints: TODO - node take over - resource classes integration tests: https://review.openstack.org/#/c/443628/ - radosgw (https://bugs.launchpad.net/ironic/+bug/1737957) Essential Priorities ==================== Ironic client API version negotiation (TheJulia, dtantsur) ---------------------------------------------------------- - RFE https://bugs.launchpad.net/python-ironicclient/+bug/1671145 - Nova bug https://bugs.launchpad.net/nova/+bug/1739440 - gerrit topic: https://review.openstack.org/#/q/topic:bug/1671145 - status as of 12 Feb 2017: - TODO: - API-SIG guideline on consuming versions in SDKs https://review.openstack.org/532814 on review - establish foundation for using version negotiation in nova - nothing more for Queens. Stay tuned... - need to make sure that we discuss/agree with nova about how to do this Classic drivers deprecation (dtantsur) -------------------------------------- - spec: http://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/classic-drivers-future.html - status as of 19 Feb 2017: - switch documentation to hardware types: - need help from vendors updating their pages! - ilo: https://review.openstack.org/#/c/542593/ - irmc: https://review.openstack.org/#/c/541171/ MERGED - idrac looks fine for now - api-ref examples: TODO - ironic-inspector: https://review.openstack.org/#/c/545285/ - migration of CI to hardware types - IPA: TODO - ironic-lib: TODO? - python-ironicclient: TODO? - python-ironic-inspector-client: TODO? - virtualbmc: TODO? 
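- illustration (rough sketch only; the exact names depend on which classic driver is being replaced, e.g. pxe_ipmitool -> ipmi): the CI/docs migration to hardware types essentially means replacing enabled_drivers with enabled_hardware_types plus explicit interface options in ironic.conf:

    [DEFAULT]
    # Classic driver being deprecated:
    #enabled_drivers = pxe_ipmitool
    # Equivalent hardware type, with the interfaces spelled out explicitly:
    enabled_hardware_types = ipmi
    enabled_power_interfaces = ipmitool
    enabled_management_interfaces = ipmitool
    enabled_deploy_interfaces = iscsi,direct
    enabled_boot_interfaces = pxe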
Traits support planning (mgoddard, johnthetubaguy, dtantsur) ------------------------------------------------------------ - status as of 12 Feb 2018: - deploy templates spec: https://review.openstack.org/504952 needs reviews - depends on deploy-steps spec: https://review.openstack.org/#/c/412523 - traits API: - need to validate node's instance_info['traits'] at deploy time (https://bugs.launchpad.net/ironic/+bug/1722194/comments/31) - https://review.openstack.org/#/c/543461 - will need to backport this to stable/queens - notes on next steps: https://etherpad.openstack.org/p/ironic-node-instance-traits Reference architecture guide (dtantsur, sambetts) ------------------------------------------------- - status as of 19 Feb 2017: - dtantsur is returning to this after the release - TheJulia suggested we do it right on the PTG - list of cases from the PTG - Admin-only provisioner - small and/or rare: TODO - non-HA acceptable, noop/flat network acceptable - large and/or frequent: TODO - HA required, neutron network or noop (static) network - Bare metal cloud for end users - smaller single-site: TODO - non-HA, ironic conductors on controllers and noop/flat network acceptable - larger single-site: TODO - HA, split out ironic conductors, neutron networking, virtual media > iPXE > PXE/TFTP - split out TFTP servers if you need them? - larger multi-site: TODO - cells v2 - ditto as single-site otherwise? High Priorities =============== Neutron event processing (vdrok, vsaienk0, sambetts) ---------------------------------------------------- - status as of 27 Sep 2017: - spec at https://review.openstack.org/343684, ready for reviews, replies from authors - WIP code at https://review.openstack.org/440778 Routed network support (sambetts, vsaienk0, bfournie, hjensas) -------------------------------------------------------------- - status as of 12 Feb 2018: - All code patches are merged. - One CI patch left, rework devstack baremetal simulation. To be done in Rocky? - This is to have actual 'flat' networks in CI. - Placement API work to be done in Rocky due to: Challenges with integration to Placement due to the way the integration was done in neutron. Neutron will create a resource provider for network segments in Placement, then it creates an os-aggregate in Nova for the segment, adds nova compute hosts to this aggregate. Ironic nodes cannot be added to host-aggregates. 
I (hjensas) had a short discussion with neutron devs (mlavalle) on the issue: http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/%23openstack-neutron.2018-01-12.log.html#t2018-01-12T17:05:38 There are patches in Nova to add support for ironic nodes in host-aggregates: - https://review.openstack.org/#/c/526753/ allow compute nodes to be associated with host agg - https://review.openstack.org/#/c/529135/ (Spec) - Patches: - CI Patches: - https://review.openstack.org/#/c/392959/ Rework Ironic devstack baremetal network simulation - RFEs (Rocky) - https://bugs.launchpad.net/networking-baremetal/+bug/1749166 - https://bugs.launchpad.net/networking-baremetal/+bug/1749162 Rescue mode (rloo, stendulker) ------------------------------ - Status as on 12 Feb 2018 - spec: http://specs.openstack.org/openstack/ironic-specs/specs/approved/implement-rescue-mode.html - code: https://review.openstack.org/#/q/topic:bug/1526449+status:open+OR+status:merged - ironic side: - all code patches have merged except for - Add documentation for rescue mode: https://review.openstack.org/#/c/431622/ MERGED - Devstack changes to enable testing add support for rescue mode: https://review.openstack.org/#/c/524118/ - We need to be careful with this, in that we can't use python-ironicclient changes that have not been released. - Update "standalone" job for supporting rescue mode: https://review.openstack.org/#/c/537821/ - Rescue mode standalone tests: https://review.openstack.org/#/c/538119/ (failing CI, not ready for reviews) - Bugs: - unrescue fails with partition user image: https://review.openstack.org/#/c/544278/ - rescue ramdisk doesn't boot on UEFI: https://review.openstack.org/#/c/545186/ - Can't Merge until we do a client release with rescue support (in Rocky): - Tempest tests with nova: https://review.openstack.org/#/c/528699/ - Run the tempest test on the CI: https://review.openstack.org/#/c/528704/ - succeeded in rescuing: http://logs.openstack.org/04/528704/16/check/ironic-tempest-dsvm-ipa-wholedisk-bios-agent_ipmitool-tinyipa/4b74169/logs/screen-ir-cond.txt.gz#_Feb_02_09_44_12_940007 - nova side: - https://blueprints.launchpad.net/nova/+spec/ironic-rescue-mode: - approved for Queens but didn't get the ironic code (client) done in time - (TheJulia) Nova has indicated that this is deferred until Rocky. - To get the nova patch merged, we need: - release new python-ironicclient - update ironicclient version in upper-constraints (this patch will be posted automatically) - update ironicclient version in global-requirement (this patch needs to be posted manually) - code patch: https://review.openstack.org/#/c/416487/ - CI is needed for nova part to land - tiendc is working for CI Clean up deploy interfaces (vdrok) ---------------------------------- - status as of 5 Feb 2017: - patch https://review.openstack.org/524433 needs update and rebase Zuul v3 jobs in-tree (sambetts, derekh, jlvillal, rloo) ------------------------------------------------------- - etherpad tracking zuul v3 -> intree: https://etherpad.openstack.org/p/ironic-zuulv3-intree-tracking - cleaning up/centralizing job descriptions (eg 'irrelevant-files'): DONE - Next TODO is to convert jobs on master, to proper ansible. NOT a high priority though. 
- (pas-ha) DNM experimental patch with "devstack-tempest" as base job https://review.openstack.org/#/c/520167/ Graphical console interface (pas-ha, vdrok, rpioso) --------------------------------------------------- - status as of 8 Jan 2017: - spec on review: https://review.openstack.org/#/c/306074/ - there is nova part here, which has to be approved too - dtantsur is worried by absence of progress here - (TheJulia) I think for rocky, it might be worth making it a prime focus, or making it a background goal. BIOS config framework (dtantsur, yolanda, rpioso) ------------------------------------------------- - status as of 8 Jan 2017: - spec under active review: https://review.openstack.org/#/c/496481/ Ansible deploy interface (pas-ha) --------------------------------- - spec: http://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/ansible-deploy-driver.html - status as of 5 Feb 2017: - code merged, CI coverage via the standalone job - docs: https://review.openstack.org/#/c/525501/ MERGED OpenStack Priorities ==================== Python 3.5 compatibility (Nisha, Ankit) --------------------------------------- - Topic: https://review.openstack.org/#/q/topic:goal-python35+NOT+project:openstack/governance+NOT+project:openstack/releases - this include all projects, not only ironic - please tag all reviews with topic "goal-python35" - TODO submit the python3 job for IPA - for ironic and ironic-inspector job enabled by disabling swift as swift is still lacking py3.5 support. - anupn to update the python3 job to build tinyipa with python3 - (anupn): Talked with swift folks and there is a bug upstream opened https://review.openstack.org/#/c/401397 for py3 support in swift. But this is not on their priority - Right now patch pass all gate jobs except agent_- drivers. - updating setup.cfg (part of requirements for the goal): - ironic: https://review.openstack.org/#/c/539500/ - MERGED - ironic-inspector: https://review.openstack.org/#/c/539502/ - MERGED Deploying with Apache and WSGI in CI (pas-ha, vsaienk0) ------------------------------------------------------- - ironic is mostly finished - (pas-ha) needs to be rewritten for uWSGI, patches on review: - https://review.openstack.org/#/c/507067 - inspector is TODO and depends on https://review.openstack.org/#/q/topic:bug/1525218 - delayed as the HA work seems to take a different direction Subprojects =========== Inspector (dtantsur) -------------------- - trying to flip dsvm-discovery to use the new dnsmasq pxe filter and failing because of bash :Dhttps://review.openstack.org/#/c/525685/6/devstack/plugin.sh at 202 - follow-ups being merged/reviewed; working on state consistency enhancements https://review.openstack.org/#/c/510928/ too (HA demo follow-up) Bifrost (TheJulia) ------------------ - Also seems a recent authentication change in keystoneauth1 has broken processing of the clouds.yaml files, i.e. `openstack` command does not work. - TheJulia will try to look at this this week. Drivers: -------- Cisco UCS (sambetts) Last updated 2018/02/05 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - Cisco CIMC driver CI back up and working on every patch - Cisco UCSM driver CI in development - Patches for updating the UCS python SDKs are in the works and should be posted soon ......... Until next week, --Rama [0] https://etherpad.openstack.org/p/IronicWhiteBoard -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sean.mcginnis at gmx.com Fri Feb 23 21:46:02 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 23 Feb 2018 15:46:02 -0600 Subject: [openstack-dev] [all] Please abandon new stable/newton patches In-Reply-To: <20180223190718.GA29577@sm-xps> References: <20180223190718.GA29577@sm-xps> Message-ID: <20180223214601.GA5777@sm-xps> On Fri, Feb 23, 2018 at 01:07:19PM -0600, Sean McGinnis wrote: > It appears some cleanup work being done in the openstack/releases repo > incorrectly recreated stable/newton branches for many projects. Along with the > branch creation are the three patches to update reno in master and update > .gitreview and constraints in the stable branch. > > This was not intentional, and none of these patches should be merged. If you > see them for any of your repos, please just abandon those patches. We will work > on cleaning up the old branches ASAP. > > Thanks for your help. > > Sean Just for closure - all patches on the erroneous branches were abandoned and the branches have been deleted. All should be back to normal now. Please let us know in #openstack-release if you see anything to indicate otherwise. Thanks! Sean From whayutin at redhat.com Fri Feb 23 22:40:57 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Fri, 23 Feb 2018 17:40:57 -0500 Subject: [openstack-dev] [ openstack-dev ][ tripleo ] unplanned outtage in RDO-Cloud In-Reply-To: References: Message-ID: On Fri, Feb 23, 2018 at 12:09 PM, Wesley Hayutin wrote: > Greetings, > > *The TripleO CI in RDO-Cloud has experienced an unplanned outage and is > down at this time. We will update this thread with more information > regarding when the CI will be brought back online as it becomes available.* > > Thank you! > Wes Hayutin > FYI.. The latest estimate for the unplanned outtage to TripleO-CI in RDO-Cloud is that it will take a number of business days to resolve the issues. Thank you! -------------- next part -------------- An HTML attachment was scrubbed... URL: From jungleboyj at gmail.com Fri Feb 23 23:47:23 2018 From: jungleboyj at gmail.com (Jay S. Bryant) Date: Fri, 23 Feb 2018 17:47:23 -0600 Subject: [openstack-dev] [cinder][ptg] Cinder Dinner and Ghost Tour Night ... Message-ID: <798dd66d-365c-6798-7105-401fc3c45574@gmail.com> Team, There are a couple of events I am setting up that I would like you aware of: First, is more details on the dinner Thursday night we already planned.  Dinner will be at 7:45 at J. Sheehan's Irish Pub: http://sheehanspub.com/  I had dinner there tonight and it was wonderful.  Think it will work great for our group.  I have made reservations for those who have responded on the Etherpad. https://etherpad.openstack.org/p/cinder-ptg-rocky  If you want to add your name, please do so by EOD Tuesday as I need to confirm the number of people. Second, I am putting together a Ghost tour night on Wednesday 2/28/18.  The Gravedigger Ghost Tour comes highly recommended and is with a tour group that we have had very good experience with. I have not made official arrangements here, but anyone who wants to join me that night is more than welcome.  You can book tickets here: https://www.viator.com/tours/Dublin/Dublin-Gravedigger-Ghost-Tour/d503-5299GRAVE?&mobile_redirect=no&pref=02&aid=gdsarlsa&mcid=28353&tsem=true&supag=52446860344&supkl=kl&supsc=s&supai=216090288444&supap=1t1&supdv=c&supnt=g&supti=aud-298992519106:dsa-348108672214&suplp=1007850&supli=&gclid=EAIaIQobChMIxO6o5pq92QIVqLvtCh2vYgdcEAAYASAAEgLPWPD_BwE Look forward to seeing you all next week! 
Ping me on IRC if you have any questions. Jay From emilien at redhat.com Sat Feb 24 00:15:12 2018 From: emilien at redhat.com (Emilien Macchi) Date: Sat, 24 Feb 2018 00:15:12 +0000 Subject: [openstack-dev] [tripleo] Draft schedule for PTG In-Reply-To: References: Message-ID: On Fri, Feb 23, 2018 at 10:28 AM, Juan Antonio Osorio wrote: > Could we change the Security talk to a day before Friday (both Thursday > and Wednesday are fine)? Luke Hinds, the leader of the OpenStack Security > group would like to join that discussion, but is not able to join that day. > Done, moved to Thursday afternoon. Hope it works for you! > > On Mon, Feb 19, 2018 at 6:37 PM, Emilien Macchi > wrote: > >> Alex and I have been working on the agenda for next week, based on what >> people proposed in topics. >> >> The draft calendar is visible here: >> https://calendar.google.com/calendar/embed?src=tgpb5tv12mlu7 >> kge5oqertje78%40group.calendar.google.com&ctz=Europe%2FDublin >> >> Also you can import the ICS from: >> https://calendar.google.com/calendar/ical/tgpb5tv12mlu7kge5o >> qertje78%40group.calendar.google.com/public/basic.ics >> >> Note this is a draft - we would love your feedback about the proposal. >> >> Some sessions might be too short or too long? You to tell us. (Please >> look at event details for descriptions). >> Also, for each session we need a "driver", please tell us if you >> volunteer to do it. >> >> Please let us know here and we'll make adjustment, we have plenty of room >> for it. >> >> Thanks! >> -- >> Emilien Macchi >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > > -- > Juan Antonio Osorio R. > e-mail: jaosorior at gmail.com > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Sat Feb 24 00:15:36 2018 From: emilien at redhat.com (Emilien Macchi) Date: Sat, 24 Feb 2018 00:15:36 +0000 Subject: [openstack-dev] =?utf-8?b?562U5aSNOiAgW3RyaXBsZW9dW3ZpdHJhZ2Vd?= =?utf-8?q?_Draft_schedule_for_PTG?= In-Reply-To: <201802231705555420751@zte.com.cn> References: <201802231705555420751@zte.com.cn> Message-ID: On Fri, Feb 23, 2018 at 9:05 AM, wrote: > Hi EmilienMacchi, > > > I added a topic about 'support Vitrage(Root Cause Analysis project) > service' in the etherpad for PTG. > > Can you please help to schedule a time slot? Maybe after the topic of > 'Upgrades and Updates', half an hour is enough. > > We want to support the feature in the TripleO roadmap for Rocky. > Cool, let's have this on Friday. 
(Schedule updated) > Thanks~ > > > BR, > > dwj > > > > > > 原始邮件 > *发件人:* ; > *收件人:* ; > *日 期 :*2018年02月20日 00:38 > *主 题 :**[openstack-dev] [tripleo] Draft schedule for PTG* > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > Alex and I have been working on the agenda for next week, based on what > people proposed in topics. > The draft calendar is visible here: > https://calendar.google.com/calendar/embed?src=tgpb5tv12mlu7kge5oqertje78% > 40group.calendar.google.com&ctz=Europe%2FDublin > > Also you can import the ICS from: > https://calendar.google.com/calendar/ical/tgpb5tv12mlu7kge5oqertje78% > 40group.calendar.google.com/public/basic.ics > > Note this is a draft - we would love your feedback about the proposal. > > Some sessions might be too short or too long? You to tell us. (Please look > at event details for descriptions). > Also, for each session we need a "driver", please tell us if you volunteer > to do it. > > Please let us know here and we'll make adjustment, we have plenty of room > for it. > > Thanks! > -- > Emilien Macchi > > > -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Sat Feb 24 00:43:00 2018 From: emilien at redhat.com (Emilien Macchi) Date: Sat, 24 Feb 2018 00:43:00 +0000 Subject: [openstack-dev] [tripleo]Testing ironic in the overcloud In-Reply-To: References: Message-ID: On Fri, Feb 23, 2018 at 2:48 PM, Derek Higgins wrote: > > > On 1 February 2018 at 16:18, Emilien Macchi wrote: > >> On Thu, Feb 1, 2018 at 8:05 AM, Derek Higgins wrote: >> [...] >> >>> o Should I create a new tempest test for baremetal as some of the >>>>> networking stuff is different? >>>>> >>>> >>>> I think we would need to run baremetal tests for this new featureset, >>>> see existing files for examples. >>>> >>> Do you mean that we should use existing tests somewhere or create new >>> ones? >>> >> >> I mean we should use existing tempest tests from ironic, etc. Maybe just >> a baremetal scenario that spawn a baremetal server and test ssh into it, >> like we already have with other jobs. >> > Done, the current set of patches sets up a new non voting job > "tripleo-ci-centos-7-scenario011-multinode-oooq-container" which setup up > ironic in the overcloud and run the ironic tempest job > "ironic_tempest_plugin.tests.scenario.test_baremetal_basic_ > ops.BaremetalBasicOps.test_baremetal_server_ops" > > its currently passing so I'd appreciate a few eyes on it before it becomes > out of date again > there are 4 patches starting here https://review.openstack. > org/#/c/509728/19 > Nice! http://logs.openstack.org/28/509728/21/check/tripleo-ci-centos-7-scenario011-multinode-oooq-container/68cb9f4/logs/tempest.html.gz Thanks for this work! We'll make sure that lands soon. > > >> >> o Is running a script on the controller with NodeExtraConfigPost the best >>>>> way to set this up or should I be doing something with quickstart? I don't >>>>> think quickstart currently runs things on the controler does it? >>>>> >>>> >>>> What kind of thing do you want to run exactly? 
>>>> >>> The contents to this file will give you an idea, somewhere I need to >>> setup a node that ironic will control with ipmi >>> https://review.openstack.org/#/c/485261/19/ci/common/vbmc_setup.yaml >>> >> >> extraconfig works for me in that case, I guess. Since we don't productize >> this code and it's for CI only, it can live here imho. >> >> Thanks, >> -- >> Emilien Macchi >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmsimard at redhat.com Sat Feb 24 01:41:13 2018 From: dmsimard at redhat.com (David Moreau Simard) Date: Fri, 23 Feb 2018 20:41:13 -0500 Subject: [openstack-dev] [ openstack-dev ][ tripleo ] unplanned outtage in RDO-Cloud In-Reply-To: References: Message-ID: Please be wary of approving changes since the Third Party CI is out of order until this is resolved. David Moreau Simard Senior Software Engineer | OpenStack RDO dmsimard = [irc, github, twitter] On Fri, Feb 23, 2018 at 5:40 PM, Wesley Hayutin wrote: > > > On Fri, Feb 23, 2018 at 12:09 PM, Wesley Hayutin > wrote: >> >> Greetings, >> >> The TripleO CI in RDO-Cloud has experienced an unplanned outage and is >> down at this time. We will update this thread with more information >> regarding when the CI will be brought back online as it becomes available. >> >> >> Thank you! >> Wes Hayutin > > > FYI.. > The latest estimate for the unplanned outtage to TripleO-CI in RDO-Cloud is > that it will take a number of business days to resolve the issues. > > Thank you! > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From mriedemos at gmail.com Sat Feb 24 02:17:55 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 23 Feb 2018 20:17:55 -0600 Subject: [openstack-dev] [nova] [placement] resource providers update 18-07 In-Reply-To: References: Message-ID: <7791b0e6-87b0-2466-64f9-58061fb6245e@gmail.com> On 2/16/2018 7:54 AM, Chris Dent wrote: > Before I get to the meat of this week's report, I'd like to request > some feedback from readers on how to improve the report. Over its > lifetime it has grown and it has now reached the point that while it > tries to give the impression of being complete, it never actually is, > and is a fair chunk of work to get that way. > > So perhaps there is a way to make it a bit more focused and thus bit > more actionable. If there are parts you can live without or parts you > can't live without, please let me know. > > One idea I've had is to do some kind of automation to make it what > amounts to a dashboard, but I'm not super inclined to do that because > the human curation has been useful for me. If it's not useful for > anyone else, however, then that's something to consider. 
-1 on a dashboard unless it's just something like a placement-specific review dashboard, but you'd have to star or somehow label placement-specific patches. I appreciate the human thought/comments on the various changes for context. I don't think I'd remove anything. One thing to maybe add is work on the osc-placement plugin: https://review.openstack.org/#/q/status:open+project:openstack/osc-placement+branch:master+topic:bp/placement-osc-plugin-rocky We did a grind through a bunch of those in Queens and made good progress on providing a minimal useful set of CLIs in osc-placement 1.0.0, so I'd like to see that continue, especially as deployments are upgrading to the point of needing to interact with placement from an ops perspective. -- Thanks, Matt From liu.xuefeng1 at zte.com.cn Sat Feb 24 06:13:56 2018 From: liu.xuefeng1 at zte.com.cn (liu.xuefeng1 at zte.com.cn) Date: Sat, 24 Feb 2018 14:13:56 +0800 (CST) Subject: [openstack-dev] =?utf-8?b?562U5aSNOiAgW3NlbGYtaGVhbGluZ11bUFRH?= =?utf-8?q?=5D_etherpad_for_PTG_session_onself-healing?= In-Reply-To: <20180222213010.wxsmgwvdy6vlwxgi@pacific.linksys.moosehall> References: 20180222213010.wxsmgwvdy6vlwxgi@pacific.linksys.moosehall Message-ID: <201802241413562711451@zte.com.cn> SGkgQWRhbVNwaWVycywNCg0KDQoNCg0KDQoNCkhlcmUgaXMgYSBicmllZiBzdW1tYXJ5IGFib3V0 IFNlbmxpbiBwcm9qZWN0DQoNCg0KU2VubGluIFByb2plY3QgKENsdXN0ZXJpbmcgc2VydmljZTog aHR0cHM6Ly93aWtpLm9wZW5zdGFjay5vcmcvd2lraS9TZW5saW4gICkNCg0KICAgc3VtbWFyeTog DQoNCkRlcGxveSBDbHVzdGVycy9ub2RlcyB3aXRoIHByb2ZpbGUNCg0KUHJvZmlsZSBtYW5hZ2Vy LCBQb2xpY3kgbWFuYWdlciBmb3IgdGhlIGNsdXN0ZXIuUG9saWN5IGV4YW1wbGVzOlNjYWxpbmcs IEhlYWx0aCwgTG9hZC1CYWxhbmNpbmcsIEFmZmluaXR5LFJlZ2lvbiBQbGFjZW1lbnQsWm9uZSBQ bGFjZW1lbmV0IGV0Yy4NCg0KQ2x1c3Rlci9Ob2RlIE1hbmFnZSggb3BlbnN0YWNrIGNsdXN0ZXIg LS1oZWxwKTpzY2FsZSBvdXQvaW4sIHJlc2l6ZSwgcmVjb3ZlciwgbWVtYmVyIG1hbmFnZXIsIGNs dXN0ZXIvbm9kZSBvcGVyYXRpb24uIG5vZGUgYWRvcHQsIGV2ZW50IG1hbmFnZXIsIGFjdGlvbiBt YW5hZ2VyIGV0Yy4NCg0KQ3JlYXRlIFN0YW5kYnkvQWN0aXZlIENsdXN0ZXIgYnkgbW92ZSBub2Rl IGFtb25nIGRpZmZlcmVudCBjbHVzdGVycw0KDQpDbHVzdGVyIHN1cHBvcnRlZCBub2RlIHR5cGVz Ok5vdmEgc2VydmVyLCBIZWF0IHN0YWNrLCBORlYgVkRVLCBLOHNbMV1bMl1bM10gLmV0Yy4NCg0K UmVjZWl2ZXJzOk1lc3NhZ2luZyhlZy4gWmFxYXIpLCBXZWJob29rDQoNCkRyaXZlcnM6Tm92YSwg Q2luZGVyLCBOZXV0cm9uLEdsYW5jZSxPY3RhaXZhIGV0Yy4gYnkgT3BlbnN0YWNrc2RrIC4uLg0K DQoNCkludGVyZ2F0aW9uIHdpdGggb3RoZXIgcHJvamVjdDpBb2RoLCBaYXFhciwgKERpc2N1c3Np b24pVml0cmFnZQ0KDQoNCiAgICAuLi4NCg0KDQogIA0KDQoNCkEgcGFydCBvZiBTZW5saW4ncyBz ZWxmLWhlYWxpbmcgcHJlc2VudGF0aW9ucyBpbiBPcGVuc3RhY2sgU3VtbWl0Og0KDQoNCg0KICAg ICBodHRwczovL3d3dy5vcGVuc3RhY2sub3JnL3ZpZGVvcy9hdXN0aW4tMjAxNi9kZXBsb3ktYW4t ZWxhc3RpYy1yZXNpbGllbnQtbG9hZC1iYWxhbmNlZC1jbHVzdGVyLWluLTUtbWludXRlcy13aXRo LXNlbmxpbiANCg0KDQogICAgIGh0dHBzOi8vd3d3Lm9wZW5zdGFjay5vcmcvdmlkZW9zL2JhcmNl bG9uYS0yMDE2L29uLWJ1aWxkaW5nLWFuLWF1dG8taGVhbGluZy1yZXNvdXJjZS1jbHVzdGVyLXVz aW5nLXNlbmxpbiANCg0KDQogICAgIGh0dHBzOi8vd3d3Lm9wZW5zdGFjay5vcmcvdmlkZW9zL2Jv c3Rvbi0yMDE3L2ludGVncmF0aW9uLW9mLWVudGVycHJpc2UtbW9uaXRvcmluZy1wcm9kdWN0LXNl bmxpbi1hbmQtbWlzdHJhbC1mb3ItYXV0by1oZWFsaW5nIA0KDQoNCiAgICAgLi4uDQoNCg0KDQoN Cg0KDQoNClsxXWh0dHBzOi8vdi5xcS5jb20veC9wYWdlL2kwNTEyNXNmb25oLmh0bWwNCg0KDQpb Ml1odHRwczovL3YucXEuY29tL3gvcGFnZS90MDUxMnZvNnR3MS5odG1sDQoNCg0KWzNdaHR0cHM6 Ly92LnFxLmNvbS94L3BhZ2UveTA1MTJlaHFpaXEuaHRtbA0KDQoNCg0KDQoNCg0KDQrljp/lp4vp gq7ku7YNCg0KDQoNCuWPkeS7tuS6uu+8mkFkYW1TcGllcnMgPGFzcGllcnNAc3VzZS5jb20+DQrm lLbku7bkurrvvJpPcGVuU3RhY2sgU0lHcyBsaXN0IDxvcGVuc3RhY2stc2lnc0BsaXN0cy5vcGVu c3RhY2sub3JnPm9wZW5zdGFjay1kZXYgbWFpbGluZyBsaXN0IDxvcGVuc3RhY2stZGV2QGxpc3Rz 
Lm9wZW5zdGFjay5vcmc+b3BlbnN0YWNrLW9wZXJhdG9ycyBtYWlsaW5nIGxpc3QgPG9wZW5zdGFj ay1vcGVyYXRvcnNAbGlzdHMub3BlbnN0YWNrLm9yZz4NCuaXpSDmnJ8g77yaMjAxOOW5tDAy5pyI MjPml6UgMDU6MzENCuS4uyDpopgg77yaW29wZW5zdGFjay1kZXZdIFtzZWxmLWhlYWxpbmddW1BU R10gZXRoZXJwYWQgZm9yIFBURyBzZXNzaW9uIG9uc2VsZi1oZWFsaW5nDQoNCg0KSGkgYWxsLA0K DQpZdXNoaXJvIGtpbmRseSBjcmVhdGVkIGFuIGV0aGVycGFkIGZvciB0aGUgc2VsZi1oZWFsaW5n IFNJRyBzZXNzaW9uIGF0DQp0aGUgRHVibGluIFBURyBvbiBUdWVzZGF5IGFmdGVybm9vbiBuZXh0 IHdlZWssIGFuZCBJJ3ZlIGZsZXNoZWQgaXQgb3V0YSBiaXQ6DQoNCiAgICBodHRwczovL2V0aGVy cGFkLm9wZW5zdGFjay5vcmcvcC9zZWxmLWhlYWxpbmctcHRnLXJvY2t5DQoNCkFueW9uZSB3aXRo IGFuIGludGVyZXN0IGluIHNlbGYtaGVhbGluZyBpcyBvZiBjb3Vyc2UgdmVyeSB3ZWxjb21lIHRv DQphdHRlbmQgKG9yIGtlZXAgYW4gZXllIG9uIGl0IHJlbW90ZWx5ISkgIFRoaXMgU0lHIGlzIHN0 aWxsIHZlcnkgeW91bmcsDQpzbyBpdCdzIGEgZ3JlYXQgY2hhbmNlIGZvciB5b3UgdG8gc2hhcGUg dGhlIGRpcmVjdGlvbiBpdCB0YWtlcyA6LSkgIElmDQp5b3UgYXJlIGFibGUgdG8gYXR0ZW5kLCBw bGVhc2UgYWRkIHlvdXIgbmFtZSwgYW5kIGFsc28gZmVlbCBmcmVlIHRvDQphZGQgdG9waWNzIHdo aWNoIHlvdSB3b3VsZCBsaWtlIHRvIHNlZSBjb3ZlcmVkLg0KDQpJdCB3b3VsZCBiZSBwYXJ0aWN1 bGFybHkgaGVscGZ1bCBpZiBvcGVyYXRvcnMgY291bGQgcGFydGljaXBhdGUgYW5kDQpzaGFyZSB0 aGVpciBleHBlcmllbmNlcyBvZiB3aGF0IGlzIG9yIGlzbid0ICh5ZXQhKSB3b3JraW5nIHdpdGgN CnNlbGYtaGVhbGluZyBpbiBPcGVuU3RhY2ssIHNvIHRoYXQgdGhvc2Ugb2YgdXMgb24gdGhlIGRl dmVsb3BtZW50IHNpZGUNCmNhbiBhaW0gdG8gc29sdmUgdGhlIHJpZ2h0IHByb2JsZW1zIDotKQ0K DQpUaGFua3MsIGFuZCBzZWUgc29tZSBvZiB5b3UgaW4gRHVibGluIQ0KDQpBZGFtDQoNCl9fX19f X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f X19fX19fX19fX19fDQpPcGVuU3RhY2sgRGV2ZWxvcG1lbnQgTWFpbGluZyBMaXN0IChub3QgZm9y IHVzYWdlIHF1ZXN0aW9ucykNClVuc3Vic2NyaWJlOiBPcGVuU3RhY2stZGV2LXJlcXVlc3RAbGlz dHMub3BlbnN0YWNrLm9yZz9zdWJqZWN0OnVuc3Vic2NyaWJlDQpodHRwOi8vbGlzdHMub3BlbnN0 YWNrLm9yZy9jZ2ktYmluL21haWxtYW4vbGlzdGluZm8vb3BlbnN0YWNrLWRldg== -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack.org at sodarock.com Sat Feb 24 06:52:08 2018 From: openstack.org at sodarock.com (John Villalovos) Date: Fri, 23 Feb 2018 22:52:08 -0800 Subject: [openstack-dev] [ironic] Stepping down from Ironic core In-Reply-To: References: Message-ID: Very sorry to see you go :( You have been a great contributor to Ironic! And a pleasure to work with. Best of luck for you and your future :) John On Fri, Feb 23, 2018 at 1:02 AM, Vasyl Saienko wrote: > Hey Ironic community! > > Unfortunately I don't work on Ironic as much as I used to any more, so i'm > stepping down from core reviewers. > > So, thanks for everything everyone, it's been great to work with you > all for all these years!!! > > > Sincerely, > Vasyl Saienko > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rico.lin.guanyu at gmail.com Sat Feb 24 11:53:59 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Sat, 24 Feb 2018 11:53:59 +0000 Subject: [openstack-dev] [heat][ptg] PTG schedule Message-ID: Dear all I can’t wait for see you all in Dublin. I put a draft version of PTG schedule in [1]. We might change schedule after PTG started. Also we’re capable to set up real time video conference, so if you’re interested in join, please put your name in etherpad so I will know that I need to set up for it. 
And just for reminding, we will not host meeting next week. Hope you all have a nice flight [1] https://etherpad.openstack.org/p/heat-rocky-ptg -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Sat Feb 24 14:42:40 2018 From: thierry at openstack.org (Thierry Carrez) Date: Sat, 24 Feb 2018 15:42:40 +0100 Subject: [openstack-dev] [ptg] Release cycles, stable branch maintenance, LTS vs. downstream consumption models Message-ID: On Tuesday afternoon we'll have a discussion on release cycle duration, stable branch maintenance, and LTS vs. how OpenStack is consumed downstream. I set up an etherpad at: https://etherpad.openstack.org/p/release-cycles-ptg-rocky Please add the topics you'd like to cover. -- Thierry Carrez (ttx) From ksnhr.tech at gmail.com Sat Feb 24 15:12:41 2018 From: ksnhr.tech at gmail.com (Kaz Shinohara) Date: Sun, 25 Feb 2018 00:12:41 +0900 Subject: [openstack-dev] [heat][ptg] PTG schedule In-Reply-To: References: Message-ID: Hi Rico & team, Thanks a lot for the schedule. See you soon in Dublin :) Regards, Kaz 2018-02-24 20:53 GMT+09:00 Rico Lin : > Dear all > > I can’t wait for see you all in Dublin. > > I put a draft version of PTG schedule in [1]. > We might change schedule after PTG started. > > Also we’re capable to set up real time video conference, so if you’re > interested in join, please put your name in etherpad so I will know that I > need to set up for it. > > And just for reminding, we will not host meeting next week. > > Hope you all have a nice flight > > [1] https://etherpad.openstack.org/p/heat-rocky-ptg > -- > May The Force of OpenStack Be With You, > > *Rico Lin*irc: ricolin > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lhinds at redhat.com Sat Feb 24 16:17:54 2018 From: lhinds at redhat.com (Luke Hinds) Date: Sat, 24 Feb 2018 16:17:54 +0000 Subject: [openstack-dev] [tripleo] Draft schedule for PTG In-Reply-To: References: Message-ID: On Sat, Feb 24, 2018 at 12:15 AM, Emilien Macchi wrote: > > > On Fri, Feb 23, 2018 at 10:28 AM, Juan Antonio Osorio > wrote: > >> Could we change the Security talk to a day before Friday (both Thursday >> and Wednesday are fine)? Luke Hinds, the leader of the OpenStack Security >> group would like to join that discussion, but is not able to join that day. >> > > Done, moved to Thursday afternoon. Hope it works for you! > Thanks Emilien! > >> >> On Mon, Feb 19, 2018 at 6:37 PM, Emilien Macchi >> wrote: >> >>> Alex and I have been working on the agenda for next week, based on what >>> people proposed in topics. >>> >>> The draft calendar is visible here: >>> https://calendar.google.com/calendar/embed?src=tgpb5tv12mlu7 >>> kge5oqertje78%40group.calendar.google.com&ctz=Europe%2FDublin >>> >>> Also you can import the ICS from: >>> https://calendar.google.com/calendar/ical/tgpb5tv12mlu7kge5o >>> qertje78%40group.calendar.google.com/public/basic.ics >>> >>> Note this is a draft - we would love your feedback about the proposal. >>> >>> Some sessions might be too short or too long? You to tell us. (Please >>> look at event details for descriptions). 
>>> Also, for each session we need a "driver", please tell us if you >>> volunteer to do it. >>> >>> Please let us know here and we'll make adjustment, we have plenty of >>> room for it. >>> >>> Thanks! >>> -- >>> Emilien Macchi >>> >>> ____________________________________________________________ >>> ______________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.op >>> enstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >> >> >> -- >> Juan Antonio Osorio R. >> e-mail: jaosorior at gmail.com >> >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > > -- > Emilien Macchi > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Sat Feb 24 20:56:43 2018 From: zigo at debian.org (Thomas Goirand) Date: Sat, 24 Feb 2018 21:56:43 +0100 Subject: [openstack-dev] [ptl][all][python3] collecting current status of python 3 support in projects In-Reply-To: <1519341965-sup-8914@lrrr.local> References: <1519341965-sup-8914@lrrr.local> Message-ID: <74e92b1b-9aee-d3e2-306d-f1719ff6c7b7@debian.org> On 02/23/2018 12:29 AM, Doug Hellmann wrote: > I am trying to update the wiki document with the current state of > support for Python 3 projects as part of preparing for a discussion > about moving from "Python 2 first, then 3" to "Python 3 first, then > 2" development. > > I have added the missing libraries and services (at least those > managed by the release team) and done my best to figure out if there > are unit and functional/integration test jobs for each project. > > I need your help to verify the information I have collected and fill in > any gaps. > > Please look through the tables in [1] and if your projects' status > is out of date either update the page directly or email me (off > list) with the updates. > > Thanks! > Doug > > [1] https://wiki.openstack.org/wiki/Python3#Python_3_Status_of_OpenStack_projects Hi Doug! As I've been working over the course of this week on switching all of Debian OpenStack to Py3, I have a bit experience with it. Unfortunately, I can only tell about unit tests, as I haven't run functional tests yet. Mostly, it's working well, and even in Python 3.6 in Sid. Though what I've seen often, is the tooling, and especially the sphinx docs, expecting Python 2 to be there. For example (and that's just an example, I'm not pointing finger at any project here...) generating the sphinx doc of Cinder calls binaries in the "tools" folder (ie: tools/generate_driver_list.py) which has "#! /usr/bin/env python" as first line. Of course, under my Python 3 only environment, it just fails miserably, and I had to patch the files. Another example would be Congress generating its lexer with some Python 2 type of exception (those with coma instead of "as"). I fixed that at build time with Victor's sixer tool (which really is awesome, thanks for it Victor!). 
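To make those two patterns concrete, here is a minimal sketch (not taken
from Cinder or Congress, purely an illustration) of what such fixes look
like:

    #!/usr/bin/env python3
    # "#!/usr/bin/env python" assumes a "python" binary exists, which a
    # Python-3-only system may not ship; pointing the shebang at python3
    # (or patching it downstream) avoids the failure described above.

    def handle(exc):
        print("caught: %s" % exc)

    try:
        int("not a number")
    # Python-2-only spelling, rejected outright by the Python 3 parser:
    #     except ValueError, exc:
    except ValueError as exc:  # valid on both Python 2 and 3
        handle(exc)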
Then there's Nova which annoyed me when generating the doc because of seemingly a bug in the Python 3 version of blockdiag (I may be wrong, but I don't think Nova itself is at fault here). I would have more details like this, but I guess you understand the general issue I'm raising: mostly we need to get rid of Python 2 completely, because otherwise, it's expected to be the default. So I'm really looking forward it happens upstream. LET'S KILL PYTHON 2 SUPPORT !!! :) More seriously, it'd be nice if all the docs tooling were effectively switching to Python 3, otherwise other issues will be reported. Also, it is annoying to see that manila-ui isn't Python 3 ready at all. I guess I'll simply skip manila-ui for this release (since all of Horizon is already switched to Python 3 on my side). I'm expecting to see more of these Horizon plugins to not be ready (I haven't completely finished that part...). I hope this helps, Cheers, Thomas Goirand (zigo) From juliaashleykreger at gmail.com Sat Feb 24 23:50:15 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Sat, 24 Feb 2018 23:50:15 +0000 Subject: [openstack-dev] [ironic] Stepping down from Ironic core In-Reply-To: References: Message-ID: Hey Vasyl! This is saddening news, but thank you for letting us know. It has been a pleasure working with you! I’ve gone ahead and removed you from ironic-core. Until next time! -Julia On Fri, Feb 23, 2018 at 2:02 AM Vasyl Saienko wrote: > Hey Ironic community! > > Unfortunately I don't work on Ironic as much as I used to any more, so i'm > stepping down from core reviewers. > > So, thanks for everything everyone, it's been great to work with you > all for all these years!!! > > > Sincerely, > Vasyl Saienko > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Sun Feb 25 09:29:25 2018 From: zigo at debian.org (Thomas Goirand) Date: Sun, 25 Feb 2018 10:29:25 +0100 Subject: [openstack-dev] [ptg] Release cycles, stable branch maintenance, LTS vs. downstream consumption models In-Reply-To: References: Message-ID: <0d00a8e0-1913-19de-2a00-b5a14de09720@debian.org> On 02/24/2018 03:42 PM, Thierry Carrez wrote: > On Tuesday afternoon we'll have a discussion on release cycle duration, > stable branch maintenance, and LTS vs. how OpenStack is consumed downstream. > > I set up an etherpad at: > https://etherpad.openstack.org/p/release-cycles-ptg-rocky > > Please add the topics you'd like to cover. I really wish I could be there. Is there any ways I could attend remotely? Like someone with Skype or something... Cheers, Thomas Goirand (zigo) From tobias at citynetwork.se Sun Feb 25 13:21:30 2018 From: tobias at citynetwork.se (Tobias Rydberg) Date: Sun, 25 Feb 2018 14:21:30 +0100 Subject: [openstack-dev] [publiccloud-wg][PTG] Schedule Message-ID: Hi folks, Here is the schedule for the Public Cloud WG parts at the PTG next week. Come int person or join remote. We will try to get some link up for remotes to join. Will get back with more information around that as soon as I have something to share. 
https://etherpad.openstack.org/p/publiccloud-wg-ptg-rocky Cheers, Tobias -- Tobias Rydberg Senior Developer Mobile: +46 733 312780 www.citynetwork.eu | www.citycloud.com INNOVATION THROUGH OPEN IT INFRASTRUCTURE ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3945 bytes Desc: S/MIME Cryptographic Signature URL: From fm577c at att.com Sun Feb 25 19:27:53 2018 From: fm577c at att.com (MONTEIRO, FELIPE C) Date: Sun, 25 Feb 2018 19:27:53 +0000 Subject: [openstack-dev] [QA][all] Migration of Tempest / Grenade jobs to Zuul v3 native In-Reply-To: References: Message-ID: <7D5E803080EF7047850D309B333CB94E22D7045F@GAALPA1MSGUSRBI.ITServices.sbc.com> Patrole has also started migration to Zuul v3 native with: https://review.openstack.org/#/c/547820/ Thanks, Felipe > -----Original Message----- > From: Michael Johnson [mailto:johnsomor at gmail.com] > Sent: Wednesday, February 21, 2018 11:35 AM > To: OpenStack Development Mailing List (not for usage questions) > > Subject: Re: [openstack-dev] [QA][all] Migration of Tempest / Grenade jobs > to Zuul v3 native > > FYI, Octavia has started to use the new devstack-tempest parent here: > https://urldefense.proofpoint.com/v2/url?u=https- > 3A__review.openstack.org_- > 23_c_543034_17_zuul.d_jobs.yaml&d=DwIGaQ&c=LFYZ- > o9_HUMeMTSQicvjIg&r=GL712YbQ1dO5c4PRjp- > cePgOMxie8Iw1Rm6vREW7qaI&m=H7JhB2wyLMo_XoBYdgZGwOfTHS2dW5 > Y9_N3SF6xAKLk&s=YJByaMybmEINehssnk52zVCM_4dNVjqWsVAweWDo_1Y > &e= > There is a lot of work still left to do on our tempest-plugin but we > are making progress. > > Thanks for the communication out! > > Michael > > > On Tue, Feb 20, 2018 at 1:22 PM, Andrea Frittoli > wrote: > > Dear all, > > > > updates: > > > > - host/group vars: zuul now supports declaring host and group vars in the > > job definition [0][1] - thanks corvus and infra team! > > This is a great help towards writing the devstack and tempest base > > multinode jobs [2][3] > > * NOTE: zuul merges dict variables through job inheritance. Variables in > > host/group_vars override global ones. I will write some examples further > > clarify this. > > > > - stable/pike: devstack ansible changes have been backported to > stable/pike, > > so we can now run zuulv3 jobs against stable/pike too - thank you tosky! > > next change in progress related to pike is to provide tempest-full-pike > > for branchless repositories [4] > > > > - documentation: devstack now publishes documentation on its ansible > roles > > [5]. > > More devstack documentation patches are in progress to provide jobs > > reference, examples and a job migration how-to [6]. 
> > > > > > Andrea Frittoli (andreaf) > > > > [0] > > https://urldefense.proofpoint.com/v2/url?u=https- > 3A__docs.openstack.org_infra_zuul_user_config.html-23attr-2Djob.host- > 5Fvars&d=DwIGaQ&c=LFYZ-o9_HUMeMTSQicvjIg&r=GL712YbQ1dO5c4PRjp- > cePgOMxie8Iw1Rm6vREW7qaI&m=H7JhB2wyLMo_XoBYdgZGwOfTHS2dW5 > Y9_N3SF6xAKLk&s=CosTB1Amrpom- > Num7uRT76rcbUKLEtEtsy3wUAQ6cUw&e= > > [1] > > https://urldefense.proofpoint.com/v2/url?u=https- > 3A__docs.openstack.org_infra_zuul_user_config.html-23attr-2Djob.group- > 5Fvars&d=DwIGaQ&c=LFYZ-o9_HUMeMTSQicvjIg&r=GL712YbQ1dO5c4PRjp- > cePgOMxie8Iw1Rm6vREW7qaI&m=H7JhB2wyLMo_XoBYdgZGwOfTHS2dW5 > Y9_N3SF6xAKLk&s=pRDPJ8v49Gv5- > CbLE151Mo3gKCbns2PvYjqGodo_JOU&e= > > [2] https://urldefense.proofpoint.com/v2/url?u=https- > 3A__review.openstack.org_-23_c_545696_&d=DwIGaQ&c=LFYZ- > o9_HUMeMTSQicvjIg&r=GL712YbQ1dO5c4PRjp- > cePgOMxie8Iw1Rm6vREW7qaI&m=H7JhB2wyLMo_XoBYdgZGwOfTHS2dW5 > Y9_N3SF6xAKLk&s=huCs3ubYx5iKKmPZUyeI11cUpnsIPq99RQPspDyB-Ng&e= > > [3] https://urldefense.proofpoint.com/v2/url?u=https- > 3A__review.openstack.org_-23_c_545724_&d=DwIGaQ&c=LFYZ- > o9_HUMeMTSQicvjIg&r=GL712YbQ1dO5c4PRjp- > cePgOMxie8Iw1Rm6vREW7qaI&m=H7JhB2wyLMo_XoBYdgZGwOfTHS2dW5 > Y9_N3SF6xAKLk&s=RN0hTSHYSxXIBtGhcRXOO4BRV9OTrrvj-aUnhyFdf6c&e= > > [4] https://urldefense.proofpoint.com/v2/url?u=https- > 3A__review.openstack.org_-23_c_546196_&d=DwIGaQ&c=LFYZ- > o9_HUMeMTSQicvjIg&r=GL712YbQ1dO5c4PRjp- > cePgOMxie8Iw1Rm6vREW7qaI&m=H7JhB2wyLMo_XoBYdgZGwOfTHS2dW5 > Y9_N3SF6xAKLk&s=84M8P63oHq8oodoI2Oufe- > XM07YQl6beCfve0GWU6uI&e= > > [5] https://urldefense.proofpoint.com/v2/url?u=https- > 3A__docs.openstack.org_devstack_latest_roles.html&d=DwIGaQ&c=LFYZ- > o9_HUMeMTSQicvjIg&r=GL712YbQ1dO5c4PRjp- > cePgOMxie8Iw1Rm6vREW7qaI&m=H7JhB2wyLMo_XoBYdgZGwOfTHS2dW5 > Y9_N3SF6xAKLk&s=jwdCu8h63MicciUk_uoI_2M3iCI02g3Ou1kz8SoA840&e= > > [6] https://urldefense.proofpoint.com/v2/url?u=https- > 3A__review.openstack.org_-23_c_545992_&d=DwIGaQ&c=LFYZ- > o9_HUMeMTSQicvjIg&r=GL712YbQ1dO5c4PRjp- > cePgOMxie8Iw1Rm6vREW7qaI&m=H7JhB2wyLMo_XoBYdgZGwOfTHS2dW5 > Y9_N3SF6xAKLk&s=lnpEoAuvoAC5rJS-PyRsGjoJkvQqIR68ZO5uUnL4XGs&e= > > > > > > On Mon, Feb 19, 2018 at 2:46 PM Andrea Frittoli > > > wrote: > >> > >> Dear all, > >> > >> updates: > >> - tempest-full-queens and tempest-full-py3-queens are now available for > >> testing of branchless repositories [0]. They are used for tempest and > >> devstack-gate. If you own a tempest plugin in a branchless repo, you may > >> consider adding similar jobs to your plugin if you use it for tests on > >> stable/queen as well. 
> >> - if you have migrated jobs based on devstack-tempest please let me > know, > >> I'm building reference docs and I'd like to include as many examples as > >> possible > >> - work on multi-node is in progress, but not ready still - you can follow > >> the patches in the multinode branch [1] > >> - updates on some of the points from my previous email are inline below > >> > >> Andrea Frittoli (andreaf) > >> > >> [0] https://urldefense.proofpoint.com/v2/url?u=http- > 3A__git.openstack.org_cgit_openstack_tempest_tree_.zuul.yaml- > 23n73&d=DwIGaQ&c=LFYZ-o9_HUMeMTSQicvjIg&r=GL712YbQ1dO5c4PRjp- > cePgOMxie8Iw1Rm6vREW7qaI&m=H7JhB2wyLMo_XoBYdgZGwOfTHS2dW5 > Y9_N3SF6xAKLk&s=AJSVcf8OfdJvORTXJkaX0icEunv-JHuNrTRptvPYQ2Y&e= > >> [1] > >> https://urldefense.proofpoint.com/v2/url?u=https- > 3A__review.openstack.org_-23_q_status-3Aopen-2B-2Bbranch-3Amaster- > 2Btopic-3Amultinode&d=DwIGaQ&c=LFYZ- > o9_HUMeMTSQicvjIg&r=GL712YbQ1dO5c4PRjp- > cePgOMxie8Iw1Rm6vREW7qaI&m=H7JhB2wyLMo_XoBYdgZGwOfTHS2dW5 > Y9_N3SF6xAKLk&s=2xPznmETr17tXPzZs5nG1gPoMp-VJtjK-x8FAp4j4Sw&e= > >> > >> > >> On Thu, Feb 15, 2018 at 11:31 PM Andrea Frittoli > >> wrote: > >>> > >>> Dear all, > >>> > >>> this is the first or a series of ~regular updates on the migration of > >>> Tempest / Grenade jobs to Zuul v3 native. > >>> > >>> The QA team together with the infra team are working on providing the > >>> OpenStack community with a set of base Tempest / Grenade jobs that > can be > >>> used as a basis to write new CI jobs / migrate existing legacy ones with a > >>> minimal effort and very little or no Ansible knowledge as a precondition. > >>> > >>> The effort is tracked in an etherpad [0]; I'm trying to keep the etherpad > >>> up to date but it may not always be a source of truth. > >>> > >>> Useful jobs available so far: > >>> - devstack-tempest [0] is a simple tempest/devstack job that runs > >>> keystone glance nova cinder neutron swift and tempest *smoke* filter > >>> - tempest-full [1] is similar but runs a full test run - it replaces the > >>> legacy tempest-dsvm-neutron-full from the integrated gate > >>> - tempest-full-py3 [2] runs a full test run on python3 - it replaces the > >>> legacy tempest-dsvm-py35 > >> > >> > >> Some more details on this topic: what I did not mention in my previous > >> email is that the autogenerated Tempest / Grenade CI jobs (legacy-* > >> playbooks) are not meant to be used as a basis for Zuul V3 native jobs. To > >> create Zuul V3 Tempest / Grenade native jobs for your projects you need > to > >> through away the legacy playbooks and defined new jobs in .zuul.yaml, as > >> documented in the zuul v3 docs [2]. > >> The parent job for a single node Tempest job will usually be > >> devstack-tempest. Example migrated jobs are avilable, for instance: [3] > [4]. 
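For reference, a minimal .zuul.yaml entry following the quoted how-to
(devstack-tempest as the parent job) looks roughly like the sketch below.
The project, plugin and regex names are placeholders and the exact
variables depend on the devstack-tempest version, so treat it as an
illustration rather than a definitive job definition:

    # Hypothetical example only; names below are placeholders.
    - job:
        name: example-tempest-plugin-job
        parent: devstack-tempest
        description: Run tempest against an example service plugin.
        required-projects:
          - openstack/example-tempest-plugin
        vars:
          tox_envlist: all
          tempest_test_regex: example
          devstack_plugins:
            example-plugin: https://git.openstack.org/openstack/example-plugin

    - project:
        check:
          jobs:
            - example-tempest-plugin-job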
> >> > >> [2] > >> https://urldefense.proofpoint.com/v2/url?u=https- > 3A__docs.openstack.org_infra_manual_zuulv3.html-23howto-2Dupdate- > 2Dlegacy-2Djobs&d=DwIGaQ&c=LFYZ- > o9_HUMeMTSQicvjIg&r=GL712YbQ1dO5c4PRjp- > cePgOMxie8Iw1Rm6vREW7qaI&m=H7JhB2wyLMo_XoBYdgZGwOfTHS2dW5 > Y9_N3SF6xAKLk&s=rDi-IqrYMnGm8V88nG0JdH1ejomqU6kZTPg8BSfkajw&e= > >> [3] > >> https://urldefense.proofpoint.com/v2/url?u=http- > 3A__git.openstack.org_cgit_openstack_sahara-2Dtests_tree_.zuul.yaml- > 23n21&d=DwIGaQ&c=LFYZ-o9_HUMeMTSQicvjIg&r=GL712YbQ1dO5c4PRjp- > cePgOMxie8Iw1Rm6vREW7qaI&m=H7JhB2wyLMo_XoBYdgZGwOfTHS2dW5 > Y9_N3SF6xAKLk&s=XI4c3uIrY_iUIG3HaF3FWjqTchQJ0ZfihkB4ophJ_eg&e= > >> [4] https://urldefense.proofpoint.com/v2/url?u=https- > 3A__review.openstack.org_-23_c_543048_5&d=DwIGaQ&c=LFYZ- > o9_HUMeMTSQicvjIg&r=GL712YbQ1dO5c4PRjp- > cePgOMxie8Iw1Rm6vREW7qaI&m=H7JhB2wyLMo_XoBYdgZGwOfTHS2dW5 > Y9_N3SF6xAKLk&s=i8Q7IeOXLXGLVqjN09OSJ3QZDQDKIhhTYl7qTSycXUI&e= > >> > >>> > >>> > >>> Both tempest-full and tempest-full-py3 are part of integrated-gate > >>> templates, starting from stable/queens on. > >>> The other stable branches still run the legacy jobs, since devstack > >>> ansible changes have not been backported (yet). If we do backport it will > be > >>> up to pike maximum. > >>> > >>> Those jobs work in single node mode only at the moment. Enabling > >>> multinode via job configuration only require a new Zuul feature [4][5] > that > >>> should be available soon; the new feature allows defining host/group > >>> variables in the job definition, which means setting variables which are > >>> specific to one host or a group of hosts. > >>> Multinode DVR and Ironic jobs will require migration of the ovs-* roles > >>> form devstack-gate to devstack as well. > >>> > >>> Grenade jobs (single and multinode) are still legacy, even if the > >>> *legacy* word has been removed from the name. > >>> They are currently temporarily hosted in the neutron repository. They > are > >>> going to be implemented as Zuul v3 native in the grenade repository. > >>> > >>> Roles are documented, and a couple of migration tips for > DEVSTACK_GATE > >>> flags is available in the etherpad [0]; more comprehensive examples / > docs > >>> will be available as soon as possible. > >>> > >>> Please let me know if you find this update useful and / or if you would > >>> like to see different information in it. > >>> I will send further updates as soon as significant changes / new features > >>> become available. 
> >>> Andrea Frittoli (andreaf)
> >>>
> >>> [0] https://etherpad.openstack.org/p/zuulv3-native-devstack-tempest-jobs
> >>> [1] http://git.openstack.org/cgit/openstack/tempest/tree/.zuul.yaml#n1
> >>> [2] http://git.openstack.org/cgit/openstack/tempest/tree/.zuul.yaml#n29
> >>> [3] http://git.openstack.org/cgit/openstack/tempest/tree/.zuul.yaml#n47
> >>> [4] https://etherpad.openstack.org/p/zuulv3-group-variables
> >>> [5] https://review.openstack.org/#/c/544562/
> >
> >
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From doug at doughellmann.com Sun Feb 25 19:30:35 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Sun, 25 Feb 2018 19:30:35 +0000
Subject: [openstack-dev] [ptl][all][python3] collecting current status of python 3 support in projects
In-Reply-To: <74e92b1b-9aee-d3e2-306d-f1719ff6c7b7@debian.org>
References: <1519341965-sup-8914@lrrr.local>
 <74e92b1b-9aee-d3e2-306d-f1719ff6c7b7@debian.org>
Message-ID: <1519586863-sup-6419@lrrr.local>

Excerpts from Thomas Goirand's message of 2018-02-24 21:56:43 +0100:
> On 02/23/2018 12:29 AM, Doug Hellmann wrote:
> > I am
trying to update the wiki document with the current state of
> > support for Python 3 projects as part of preparing for a discussion
> > about moving from "Python 2 first, then 3" to "Python 3 first, then
> > 2" development.
> >
> > I have added the missing libraries and services (at least those
> > managed by the release team) and done my best to figure out if there
> > are unit and functional/integration test jobs for each project.
> >
> > I need your help to verify the information I have collected and fill in
> > any gaps.
> >
> > Please look through the tables in [1] and if your projects' status
> > is out of date either update the page directly or email me (off
> > list) with the updates.
> >
> > Thanks!
> > Doug
> >
> > [1] https://wiki.openstack.org/wiki/Python3#Python_3_Status_of_OpenStack_projects
>
> Hi Doug!
>
> As I've been working over the course of this week on switching all of
> Debian OpenStack to Py3, I have a bit of experience with it. Unfortunately,
> I can only speak to unit tests, as I haven't run functional tests yet.
>
> Mostly, it's working well, even on Python 3.6 in Sid.
>
> What I've seen often, though, is the tooling, and especially the sphinx
> docs, expecting Python 2 to be there. For example (and that's just an
> example, I'm not pointing fingers at any project here...) generating the
> sphinx doc of Cinder calls binaries in the "tools" folder (ie:
> tools/generate_driver_list.py) which has "#! /usr/bin/env python" as its
> first line. Of course, under my Python 3-only environment, it just fails
> miserably, and I had to patch the files.
>
> Another example would be Congress generating its lexer with some Python
> 2 style exceptions (those with a comma instead of "as"). I fixed that at
> build time with Victor's sixer tool (which really is awesome, thanks for
> it Victor!).
>
> Then there's Nova, which annoyed me when generating the doc because of
> what seems to be a bug in the Python 3 version of blockdiag (I may be wrong,
> but I don't think Nova itself is at fault here).
>
> I could give more details like this, but I guess you understand the
> general issue I'm raising: mostly, we need to get rid of Python 2
> completely, because otherwise it's expected to be the default. So I'm
> really looking forward to that happening upstream.
>
> LET'S KILL PYTHON 2 SUPPORT !!! :)
>
> More seriously, it'd be nice if all the docs tooling actually switched
> to Python 3, otherwise more issues like these will be reported.
>
> Also, it is annoying to see that manila-ui isn't Python 3 ready at all.
> I guess I'll simply skip manila-ui for this release (since all of
> Horizon is already switched to Python 3 on my side). I expect more of
> these Horizon plugins not to be ready (I haven't completely finished
> that part...).
>
> I hope this helps,
> Cheers,
>
> Thomas Goirand (zigo)
>

Thanks for bringing this up, Thomas. You make a good point about
addressing Python 2 use outside of just our test jobs, and the issue is
easily actionable. We do have the ability to specify a Python 3 version
of the doc build job, so maybe we can take some time this cycle to move
all projects over to using it and resolve the issues you've spotted.

Does anyone want to volunteer to help with that migration? I'll bet a
lot of projects will just work, and the ones that don't shouldn't be
difficult to fix if their unit tests already run under Python 3.
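As a purely illustrative sketch of the Python 2-only exception syntax Thomas
mentions (the module and names below are made up, not taken from the actual
Congress lexer), the broken form and the portable form that tools like sixer
produce look like this:

```python
# Illustrative only; these names are hypothetical stand-ins.

class ParseError(Exception):
    """Stand-in exception used for the example."""


def parse(rule):
    # Pretend parser that rejects everything, just to exercise the handler.
    raise ParseError("cannot parse %r" % rule)


# Python 2-only spelling, which is a SyntaxError on Python 3:
#
#     try:
#         parse("p(x) :- q(x)")
#     except ParseError, exc:
#         print("bad rule: %s" % exc)

# Portable spelling, valid on both Python 2 and Python 3 (what sixer
# rewrites the comma form to):
try:
    parse("p(x) :- q(x)")
except ParseError as exc:
    print("bad rule: %s" % exc)
```

The shebang problem is similarly mechanical: scripts that are expected to run
on a Python 3-only system need "#!/usr/bin/env python3" (or to be invoked with
an explicit interpreter) rather than the bare "python".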
Doug From kennelson11 at gmail.com Sun Feb 25 22:51:24 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Sun, 25 Feb 2018 22:51:24 +0000 Subject: [openstack-dev] [First Contact] [SIG] Rocky PTG Planning In-Reply-To: References: Message-ID: Hello! Can't wait to see you all tomorrow! I'm thinking we get started at 9:00? If you want to show up early and hang out, the room (Canal Cafe) is available starting at 8:30 AM. -Kendall (diablo_rojo) On Tue, 9 Jan 2018, 8:30 pm Kendall Nelson, wrote: > Hello Everyone :) > > I put us down for one day at the PTG and wanted to get a jump start on > discussion planning. > > I created an etherpad[1] and wrote down some topics to get the ball > rolling. Please feel free to expand on them if there are other details you > feel we need to talk about or add new ones as you see fit. > > Also, please add your name to the 'Planned Attendance' section if you are > thinking of attending. > > Thanks! > > -Kendall (diablo_rojo) > > [1] https://etherpad.openstack.org/p/FC_SIG_Rocky_PTG > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shilla.saebi at gmail.com Sun Feb 25 23:52:16 2018 From: shilla.saebi at gmail.com (Shilla Saebi) Date: Sun, 25 Feb 2018 18:52:16 -0500 Subject: [openstack-dev] User Committee Election Results - February 2018 Message-ID: Hello Everyone! Please join me in congratulating 3 newly elected members of the User Committee (UC)! The winners for the 3 seats are: Melvin Hillsman Amy Marrich Yih Leong Sun Full results can be found here: https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_f7b17dc638013045 Election details can also be found here: https://governance.openstack.org/uc/reference/uc-election-feb2018.html Thank you to all of the candidates, and to all of you who voted and/or promoted the election! Shilla -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Mon Feb 26 00:05:26 2018 From: zigo at debian.org (Thomas Goirand) Date: Mon, 26 Feb 2018 01:05:26 +0100 Subject: [openstack-dev] Xen API dom0 plugin code in os-xenapi doesn't have Python 3 support Message-ID: Hi, I hope this reaches Bob Ball, I'm bringing the patch review conversation here, because I don't think Gerrit is a good enough medium. For a bit of context, I've opened this patch review: https://review.openstack.org/544809 which fixes Python 3 support within os_xenapi/dom0/etc/xapi.d/plugins. It feels like this piece of code is: - out of place - not Python 3 compliant - annoying downstream distributions - never used in OpenStack. Bob, yes, I can remove the code in the Debian package, that's not hard to do so, it's just annoying. But why would you add such a burden on all and every downstream distribution? Wouldn't there be a better place to put the CentOS Python code? Couldn't we get an RPM package to install on all XenAPI servers before they join the OpenStack cluster? To me, there's 2 alternatives: 1/ Accept this patch, so that at least the code builds/installs in downstream distributions 2/ Remove the code completely from os-xenapi I'd prefer the later, but I don't mind much. Your thoughts? 
Cheers, Thomas Goirand (zigo) From snow19642003 at yahoo.com Mon Feb 26 01:30:33 2018 From: snow19642003 at yahoo.com (William Genovese) Date: Mon, 26 Feb 2018 01:30:33 +0000 (UTC) Subject: [openstack-dev] [openstack-community] User Committee Election Results - February 2018 In-Reply-To: References: Message-ID: <1169833579.5810488.1519608633249@mail.yahoo.com> Hi- Can you tell me who is chairing the Financial Services Industry OpenStack Cloud Committee? I thought there was one at one time? I'd like to (at a minimum) sign up and contribute to this. Thank you, William (Bill) M Genovese (威廉 (比尔) 迈克尔 · 吉诺维斯) Vice President Corporate Strategy Planning | Banking, Financial Services andIT Services Solutions HUAWEI TECHNOLOGIES CO., LTD. Bantian, Longgang District Shenzhen 518129 P.R. China www.huawei.com Mobile: +86 132-4373-0940 (CN) +1 704-906-3558 (US) Email: william.michael.genovese at huawei.com Wechat: Wechat: Bill277619782016 LinkedIn: https://www.linkedin.com/in/wgenovese On ‎Monday‎, ‎February‎ ‎26‎, ‎2018‎ ‎07‎:‎53‎:‎58‎ ‎AM, Shilla Saebi wrote: Hello Everyone! Please join me in congratulating 3 newly elected members of the User Committee (UC)! The winners for the 3 seats are: Melvin Hillsman Amy Marrich Yih Leong Sun Full results can be found here: https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_f7b17dc638013045 Election details can also be found here: https://governance.openstack.org/uc/reference/uc-election-feb2018.html Thank you to all of the candidates, and to all of you who voted and/or promoted the election! Shilla _______________________________________________ Community mailing list Community at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/community -------------- next part -------------- An HTML attachment was scrubbed... URL: From feilong at catalyst.net.nz Mon Feb 26 01:38:28 2018 From: feilong at catalyst.net.nz (Fei Long Wang) Date: Mon, 26 Feb 2018 14:38:28 +1300 Subject: [openstack-dev] [Zaqar] Nominating yangzhenyu for Zaqar core Message-ID: Hi team, I would like to propose adding Zhenyu Yang(yangzhenyu) for the Zaqar core team. He has been an awesome contributor since joining the Zaqar team. And now he is the most active non-core contributor on Zaqar projects for the last 180 days[1]. Zhenyu has great technical expertise and contributed many high quality patches. I'm sure he would be an excellent addition to the team. If no one objects, I'll proceed and add him in a week from now. Thanks. [1] http://stackalytics.com/report/contribution/zaqar-group/180 -- Cheers & Best regards, Feilong Wang (王飞龙) -------------------------------------------------------------------------- Senior Cloud Software Engineer Tel: +64-48032246 Email: flwang at catalyst.net.nz Catalyst IT Limited Level 6, Catalyst House, 150 Willis Street, Wellington -------------------------------------------------------------------------- From zhang.lei.fly at gmail.com Mon Feb 26 01:53:13 2018 From: zhang.lei.fly at gmail.com (Jeffrey Zhang) Date: Mon, 26 Feb 2018 09:53:13 +0800 Subject: [openstack-dev] [kolla]no meeting at Feb 28 because of PTG Message-ID: Due to the PTG in Dublin, next meeting at Feb 28 is canceled. -- Regards, Jeffrey Zhang Blog: http://xcodest.me -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From edgar.magana at workday.com Mon Feb 26 03:38:54 2018 From: edgar.magana at workday.com (Edgar Magana) Date: Mon, 26 Feb 2018 03:38:54 +0000 Subject: [openstack-dev] [User-committee] User Committee Election Results - February 2018 In-Reply-To: References: Message-ID: <876B0B60-ADB0-4CE4-B1FC-5110622D08BE@workday.com> Congratulations Folks! We have a great team to continue the growing of the UC. Your first action is to assign a chair for the UC and let the board of directors about your election. I wish you all the best! Edgar Magana On Feb 25, 2018, at 3:53 PM, Shilla Saebi > wrote: Hello Everyone! Please join me in congratulating 3 newly elected members of the User Committee (UC)! The winners for the 3 seats are: Melvin Hillsman Amy Marrich Yih Leong Sun Full results can be found here: https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_f7b17dc638013045 Election details can also be found here: https://governance.openstack.org/uc/reference/uc-election-feb2018.html Thank you to all of the candidates, and to all of you who voted and/or promoted the election! Shilla _______________________________________________ User-committee mailing list User-committee at lists.openstack.org https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_user-2Dcommittee&d=DwIGaQ&c=DS6PUFBBr_KiLo7Sjt3ljp5jaW5k2i9ijVXllEdOozc&r=G0XRJfDQsuBvqa_wpWyDAUlSpeMV4W1qfWqBfctlWwQ&m=uryEDva3eeLA17jjrm73DWw4CrzTezr7HxiJNWpJAs0&s=9y-_pHwzl3ADBVlN7GbhaF8HYVQGvTQjkEvEotC9jfw&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengh512 at gmail.com Mon Feb 26 04:54:36 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Mon, 26 Feb 2018 04:54:36 +0000 Subject: [openstack-dev] [cyborg]Dublin PTG schedule update Message-ID: Hi Team, There haven been several updates that I recorded on the etherpad: https://etherpad.openstack.org/p/cyborg-ptg-rocky [1] Team Photo time has been changed [2] I've setup the sub-etherpad for each topic. Topic owner please fill in these etherpads with the content that you want to discuss [3] Plz confirm your availability for team dinner on Tuesday night [4] Room is Suite 665 -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From ifat.afek at nokia.com Mon Feb 26 07:48:56 2018 From: ifat.afek at nokia.com (Afek, Ifat (Nokia - IL/Kfar Sava)) Date: Mon, 26 Feb 2018 07:48:56 +0000 Subject: [openstack-dev] [vitrage][ptg] Vitrage PTG agenda Message-ID: <4802C810-55AF-4FB1-8B57-D7EBBCABAC79@nokia.com> Hi, Vitrage PTG discussions will start today at 9:00 Dublin time. You are all welcome to join. We will hold the discussions on webex, so you can join remotely as well. 
PTG etherpad: https://etherpad.openstack.org/p/vitrage-ptg-rocky

See you soon,
Ifat


From jaypipes at gmail.com  Mon Feb 26 08:15:40 2018
From: jaypipes at gmail.com (Jay Pipes)
Date: Mon, 26 Feb 2018 08:15:40 +0000
Subject: [openstack-dev] [nova] [placement] resource providers update 18-07
In-Reply-To: <7791b0e6-87b0-2466-64f9-58061fb6245e@gmail.com>
References: <7791b0e6-87b0-2466-64f9-58061fb6245e@gmail.com>
Message-ID: <8bd9a575-4766-760a-0181-5b490d9f14db@gmail.com>

On 02/24/2018 02:17 AM, Matt Riedemann wrote:
> On 2/16/2018 7:54 AM, Chris Dent wrote:
>> Before I get to the meat of this week's report, I'd like to request
>> some feedback from readers on how to improve the report. Over its
>> lifetime it has grown and it has now reached the point that while it
>> tries to give the impression of being complete, it never actually is,
>> and is a fair chunk of work to get that way.
>>
>> So perhaps there is a way to make it a bit more focused and thus a bit
>> more actionable. If there are parts you can live without or parts you
>> can't live without, please let me know.
>>
>> One idea I've had is to do some kind of automation to make it what
>> amounts to a dashboard, but I'm not super inclined to do that because
>> the human curation has been useful for me. If it's not useful for
>> anyone else, however, then that's something to consider.
>
> -1 on a dashboard unless it's just something like a placement-specific
> review dashboard, but you'd have to star or somehow label
> placement-specific patches. I appreciate the human thought/comments on
> the various changes for context.

As do I. Thank you, Chris, for doing this week after week. It may not
seem like it, but these emails are immensely useful for me.

Best,
-jay

From zhang.lei.fly at gmail.com  Mon Feb 26 08:39:16 2018
From: zhang.lei.fly at gmail.com (Jeffrey Zhang)
Date: Mon, 26 Feb 2018 16:39:16 +0800
Subject: [openstack-dev] [kolla] Ubuntu jobs failed on pike branch due to package dependency
Message-ID:

Recently, the Ubuntu jobs on the pike branch are red[0]. With some debugging,
I found it is caused by a package dependency issue.

*Background*

Since we had no time to upgrade Ceph from Jewel to Luminous at the end of the
pike cycle, we pinned Ceph to Jewel on the pike branch. This works on CentOS,
because ceph jewel and ceph luminous are in different repos.

But the Ubuntu Cloud Archive repo bumped ceph to Luminous. Even though ceph
luminous still exists on UCA, qemu 2.10 depends on ceph luminous, so we had
to pin qemu to 2.5 to keep using ceph Jewel[1]. And this has worked since
then.

*Now Issue*

But recently, UCA changed the libvirt-daemon package dependency and added the
following,

Package: libvirt-daemon
Version: 3.6.0-1ubuntu6.2~cloud0
...
Breaks: qemu (<< 1:2.10+dfsg-0ubuntu3.4~), qemu-kvm (<< 1:2.10+dfsg-0ubuntu3.4~)

It requires qemu 2.10 now. So the dependency is broken and the nova-libvirt
container fails to build.

*Possible Solution*

I think there are two possible ways now, but neither of them is good.

1. install ceph Luminous in the nova-libvirt container and ceph Jewel in the
   ceph-* containers
2. Bump ceph from jewel to luminous. But this breaks the backport policy,
   obviously.

So any idea on this?

[0] https://review.openstack.org/534149
[1] https://review.openstack.org/#/c/526931/

--
Regards,
Jeffrey Zhang
Blog: http://xcodest.me
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From shake.chen at gmail.com Mon Feb 26 08:51:20 2018 From: shake.chen at gmail.com (Shake Chen) Date: Mon, 26 Feb 2018 16:51:20 +0800 Subject: [openstack-dev] [kolla] Ubuntu jobs failed on pike branch due to package dependency In-Reply-To: References: Message-ID: I prefer to the option 2. On Mon, Feb 26, 2018 at 4:39 PM, Jeffrey Zhang wrote: > Recently, the Ubuntu jobs on pike branch are red[0]. With some debugging, > i found it is caused by > package dependency. > > > *Background* > > Since we have no time to upgrade ceph from Jewel to Luminous at the end of > pike cycle, we pinned > Ceph to Jewel on pike branch. This works on CentOS, because ceph jewel and > ceph luminous are on > the different repos. > > But in Ubuntu Cloud Archive repo, it bump ceph to Luminous. Even though > ceph luminous still exists > on UCA. But since qemu 2.10 depends on ceph luminous, we have to ping qemu > to 2.5 to use ceph Jewel[1]. > And this works since then. > > > *Now Issue* > > But recently, UCA changed the libvirt-daemon package dependency, and added > following, > > Package: libvirt-daemon > Version: 3.6.0-1ubuntu6.2~cloud0 > ... > Breaks: qemu (<< 1:2.10+dfsg-0ubuntu3.4~), qemu-kvm (<< > 1:2.10+dfsg-0ubuntu3.4~) > > It requires qemu 2.10 now. So dependency is broken and nova-libvirt > container is failed to build. > > > *Possible Solution* > > I think there two possible ways now, but none of them is good. > > 1. install ceph Luminuous on nova-libvirt container and ceph Jewel in > ceph-* container > 2. Bump ceph from jewel to luminous. But this breaks the backport policy, > obviously. > > So any idea on this? > > [0] https://review.openstack.org/534149 > [1] https://review.openstack.org/#/c/526931/ > > -- > Regards, > Jeffrey Zhang > Blog: http://xcodest.me > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Shake Chen -------------- next part -------------- An HTML attachment was scrubbed... URL: From guy.shaanan at nokia.com Mon Feb 26 09:10:00 2018 From: guy.shaanan at nokia.com (Shaanan, Guy (Nokia - IL/Kfar Sava)) Date: Mon, 26 Feb 2018 09:10:00 +0000 Subject: [openstack-dev] [mistral] What's new in latest CloudFlow? Message-ID: CloudFlow [1] is an open-source web-based GUI tool that helps visualize and debug Mistral workflows. With the latest release [2] of CloudFlow (v0.5.0) you can: * Visualize the flow of workflow executions * Identify the execution path of a single task in huge workflows * Search Mistral by any entity ID * Identify long-running tasks at a glance * Easily distinguish between simple task (an action) and a sub workflow execution * Follow tasks with a `retry` and/or `with-items` * 1-click to copy task's input/output/publish/params values * See complete workflow definition and per task definition YAML * And more... CloudFlow is easy to install and run (and even easier to upgrade), and we appreciate any feedback and contribution. CloudFlow currently supports unauthenticated Mistral or authentication with KeyCloak (openid-connect implementation). A support for Keystone will be added in the near future. You can try CloudFlow now on your Mistral Pike/Queens, or try it on the online demo [3]. 
[1] https://github.com/nokia/CloudFlow [2] https://github.com/nokia/CloudFlow/releases/latest [3] http://yaqluator.com:8000 Thanks, ----------------------------------------------------- Guy Shaanan Full Stack Web Developer, CI & Internal Tools CloudBand @ Nokia Software, Nokia, ISRAEL Guy.Shaanan at nokia.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From Arkady.Kanevsky at dell.com Mon Feb 26 09:31:00 2018 From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com) Date: Mon, 26 Feb 2018 09:31:00 +0000 Subject: [openstack-dev] [User-committee] User Committee Election Results - February 2018 In-Reply-To: References: Message-ID: <0e96f3a69451488aabf5c9de9aaa2a1e@AUSX13MPS308.AMER.DELL.COM> Congrats to new committee members. And thanks for great job for previous ones. From: Shilla Saebi [mailto:shilla.saebi at gmail.com] Sent: Sunday, February 25, 2018 5:52 PM To: user-committee ; OpenStack Mailing List ; OpenStack Operators ; OpenStack Dev ; community at lists.openstack.org Subject: [User-committee] User Committee Election Results - February 2018 Hello Everyone! Please join me in congratulating 3 newly elected members of the User Committee (UC)! The winners for the 3 seats are: Melvin Hillsman Amy Marrich Yih Leong Sun Full results can be found here: https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_f7b17dc638013045 Election details can also be found here: https://governance.openstack.org/uc/reference/uc-election-feb2018.html Thank you to all of the candidates, and to all of you who voted and/or promoted the election! Shilla -------------- next part -------------- An HTML attachment was scrubbed... URL: From jimmy at openstack.org Mon Feb 26 09:40:57 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Mon, 26 Feb 2018 09:40:57 +0000 Subject: [openstack-dev] [Openstack-operators] User Committee Election Results - February 2018 In-Reply-To: References: Message-ID: <5A93D629.2000704@openstack.org> Congrats everyone! And thanks to the UC Election Committee for managing :) Cheers, Jimmy > Shilla Saebi > February 25, 2018 at 11:52 PM > Hello Everyone! > > Please join me in congratulating 3 newly elected members of the User > Committee (UC)! The winners for the 3 seats are: > > Melvin Hillsman > Amy Marrich > Yih Leong Sun > > Full results can be found here: > https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_f7b17dc638013045 > > Election details can also be found here: > https://governance.openstack.org/uc/reference/uc-election-feb2018.html > > Thank you to all of the candidates, and to all of you who voted and/or > promoted the election! > > Shilla > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -------------- next part -------------- An HTML attachment was scrubbed... URL: From amotoki at gmail.com Mon Feb 26 09:57:02 2018 From: amotoki at gmail.com (Akihiro Motoki) Date: Mon, 26 Feb 2018 18:57:02 +0900 Subject: [openstack-dev] [neutron][sdk] Proposal to migrate neutronclient python bindings to OpenStack SDK Message-ID: Hi neutron and openstacksdk team, This mail proposes to change the first priority of neutron-related python binding to OpenStack SDK rather than neutronclient python bindings. I think it is time to start this as OpenStack SDK became a official project in Queens. 
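For concreteness, here is a rough sketch of the two code paths as they exist
today; the session object and cloud name below are illustrative assumptions,
not something defined by this proposal:

```python
# Illustrative sketch only. Assumes an existing keystoneauth1 session
# `sess` and a clouds.yaml entry named "mycloud" (both hypothetical).

def list_network_names_via_neutronclient(sess):
    # Today: the neutronclient python bindings, which the neutronclient
    # OSC plugin (and several other projects) consume.
    from neutronclient.v2_0 import client as neutron_client
    neutron = neutron_client.Client(session=sess)
    return [net['name'] for net in neutron.list_networks()['networks']]


def list_network_names_via_sdk(cloud_name="mycloud"):
    # Proposed first-priority path: OpenStack SDK, which OSC-native
    # network commands already consume.
    import openstack
    conn = openstack.connect(cloud=cloud_name)
    return [net.name for net in conn.network.networks()]
```

The proposal below is about making the second path the one where new neutron
features land first, with the neutronclient OSC plugin layered on top of it.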
[Current situations and problems]

Network OSC commands are categorized into two parts: OSC itself and the
neutronclient OSC plugin.
Commands implemented in OSC consume OpenStack SDK, and commands implemented
in the neutronclient OSC plugin consume the neutronclient python bindings.
This creates a tricky situation where some features are supported only in
OpenStack SDK and some features are supported only in the neutronclient
python bindings.

[Proposal]

The proposal is to implement all neutron features in OpenStack SDK as a
first-class citizen, and to have the neutronclient OSC plugin consume the
corresponding OpenStack SDK APIs.

Once this is achieved, users of OpenStack SDK can see all network-related
features.

[Migration plan]

The migration starts from Rocky (if we agree).

New features should be supported in OpenStack SDK and in OSC / the
neutronclient OSC plugin as the first priority. If a new feature depends on
the neutronclient python bindings, it can be implemented in the neutronclient
python bindings first and ported later as part of the existing-feature
transition.

Existing features only supported in the neutronclient python bindings will be
ported into OpenStack SDK, and the neutronclient OSC plugin will consume them
once they are implemented in OpenStack SDK.

[FAQ]

1. Will the neutronclient python bindings be removed in the future?

Unlike the "neutron" CLI, as of now there is no plan to drop the
neutronclient python bindings.
Quite a few projects consume them, so they will be maintained as-is.
The only change is that new features are implemented in OpenStack SDK first,
and enhancements to the neutronclient python bindings will be kept to a
minimum.

2. Should projects that consume the neutronclient python bindings switch
to OpenStack SDK?

Not necessarily. It depends on the individual project.
Projects like nova that consume a small set of neutron features can
continue to use the neutronclient python bindings.
Projects like horizon or heat that would like to support a wide range
of features might be better off switching to OpenStack SDK.
3. .... Thanks, Akihiro From giuseppe.decandia at gmail.com Mon Feb 26 09:59:27 2018 From: giuseppe.decandia at gmail.com (Pino de Candia) Date: Mon, 26 Feb 2018 03:59:27 -0600 Subject: [openstack-dev] [infra] Please delete branch "notif" of project tatu Message-ID: Hi OpenStack-Infra Team, Please delete branch "notif" of openstack/tatu. The project was recently created/imported from my private repo and only the master branch is needed for the community project. thanks for your help! Pino -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Mon Feb 26 10:02:14 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Mon, 26 Feb 2018 02:02:14 -0800 Subject: [openstack-dev] [infra] Please delete branch "notif" of project tatu In-Reply-To: References: Message-ID: <1519639334.2750431.1283418360.643DFFA6@webmail.messagingengine.com> On Mon, Feb 26, 2018, at 1:59 AM, Pino de Candia wrote: > Hi OpenStack-Infra Team, > > Please delete branch "notif" of openstack/tatu. > > The project was recently created/imported from my private repo and only the > master branch is needed for the community project. Done. Just for historical purposes the sha1 of the HEAD of the branch was 9ecbb46b8e645fbf2450d4bca09c8f4040341a85. Clark From mordred at inaugust.com Mon Feb 26 10:14:20 2018 From: mordred at inaugust.com (Monty Taylor) Date: Mon, 26 Feb 2018 10:14:20 +0000 Subject: [openstack-dev] [neutron][sdk] Proposal to migrate neutronclient python bindings to OpenStack SDK In-Reply-To: References: Message-ID: <47367b72-05d8-4cf3-7bcb-9d800273723e@inaugust.com> On 02/26/2018 09:57 AM, Akihiro Motoki wrote: > Hi neutron and openstacksdk team, > > This mail proposes to change the first priority of neutron-related > python binding to OpenStack SDK rather than neutronclient python > bindings. > I think it is time to start this as OpenStack SDK became a official > project in Queens. ++ > [Current situations and problems] > > Network OSC commands are categorized into two parts: OSC and > neutronclient OSC plugin. > Commands implemented in OSC consumes OpenStack SDK > and commands implemented as neutronclient OSC plugin consumes > neutronclient python bindings. > This brings tricky situation that some features are supported only in > OpenStack SDK and some features are supported only in neutronclient > python bindings. > > [Proposal] > > The proposal is to implement all neutron features in OpenStack SDK as > the first citizen, > and the neutronclient OSC plugin consumes corresponding OpenStack SDK APIs. > > Once this is achieved, users of OpenStack SDK users can see all > network related features. > > [Migration plan] > > The migration starts from Rocky (if we agree). > > New features should be supported in OpenStack SDK and > OSC/neutronclient OSC plugin as the first priority. If new feature > depends on neutronclient python bindings, it can be implemented in > neutornclient python bindings first and they are ported as part of > existing feature transition. > > Existing features only supported in neutronclient python bindings are > ported into OpenStack SDK, > and neutronclient OSC plugin will consume them once they are > implemented in OpenStack SDK. I think this is a great idea. We've got a bunch of good functional/integrations tests in the sdk gate as well that we can start running on neutron patches so that we don't lose cross-gating. > [FAQ] > > 1. Will neutornclient python bindings be removed in future? 
> > Different from "neutron" CLI, as of now, there is no plan to drop the > neutronclient python bindings. > Not a small number of projects consumes it, so it will be maintained as-is. > The only change is that new features are implemented in OpenStack SDK first and > enhancements of neutronclient python bindings will be minimum. > > 2. Should projects that consume neutronclient python bindings switch > to OpenStack SDK? > > Necessarily not. It depends on individual projects. > Projects like nova that consumes small set of neutron features can > continue to use neutronclient python bindings. > Projects like horizon or heat that would like to support a wide range > of features might be better to switch to OpenStack SDK. We've got a PTG session with Heat to discuss potential wider-use of SDK (and have been meaning to reach our to horizon as well) Perhaps a good first step would be to migrate the heat.engine.clients.os.neutron:NeutronClientPlugin code in Heat from neutronclient to SDK. There's already an heat.engine.clients.os.openstacksdk:OpenStackSDKPlugin plugin in Heat. I started a patch to migrate senlin from senlinclient (which is just a thin wrapper around sdk): https://review.openstack.org/#/c/532680/ For those of you who are at the PTG, I'll be giving an update on SDK after lunch on Wednesday. I'd also be more than happy to come chat about this more in the neutron room if that's useful to anybody. Monty From rocha.porto at gmail.com Mon Feb 26 10:17:44 2018 From: rocha.porto at gmail.com (Ricardo Rocha) Date: Mon, 26 Feb 2018 11:17:44 +0100 Subject: [openstack-dev] [magnum][keystone] clusters, trustees and projects Message-ID: Hi. We have an issue on the way Magnum uses keystone trusts. Magnum clusters are created in a given project using HEAT, and require a trust token to communicate back with OpenStack services - there is also integration with Kubernetes via a cloud provider. This trust belongs to a given user, not the project, so whenever we disable the user's account - for example when a user leaves the organization - the cluster becomes unhealthy as the trust is no longer valid. Given the token is available in the cluster nodes, accessible by users, a trust linked to a service account is also not a viable solution. Is there an existing alternative for this kind of use case? I guess what we might need is a trust that is linked to the project. I believe the same issue would be there using application credentials, as the ownership is similar. Cheers, Ricardo From slawek at kaplonski.pl Mon Feb 26 10:19:52 2018 From: slawek at kaplonski.pl (=?utf-8?B?U8WCYXdvbWlyIEthcMWCb8WEc2tp?=) Date: Mon, 26 Feb 2018 11:19:52 +0100 Subject: [openstack-dev] [neutron][sdk] Proposal to migrate neutronclient python bindings to OpenStack SDK In-Reply-To: <47367b72-05d8-4cf3-7bcb-9d800273723e@inaugust.com> References: <47367b72-05d8-4cf3-7bcb-9d800273723e@inaugust.com> Message-ID: I also agree that it is good idea and I would be very happy to help with such migration :) — Best regards Slawek Kaplonski slawek at kaplonski.pl > Wiadomość napisana przez Monty Taylor w dniu 26.02.2018, o godz. 11:14: > > On 02/26/2018 09:57 AM, Akihiro Motoki wrote: >> Hi neutron and openstacksdk team, >> This mail proposes to change the first priority of neutron-related >> python binding to OpenStack SDK rather than neutronclient python >> bindings. >> I think it is time to start this as OpenStack SDK became a official >> project in Queens. 
> > ++ > >> [Current situations and problems] >> Network OSC commands are categorized into two parts: OSC and >> neutronclient OSC plugin. >> Commands implemented in OSC consumes OpenStack SDK >> and commands implemented as neutronclient OSC plugin consumes >> neutronclient python bindings. >> This brings tricky situation that some features are supported only in >> OpenStack SDK and some features are supported only in neutronclient >> python bindings. >> [Proposal] >> The proposal is to implement all neutron features in OpenStack SDK as >> the first citizen, >> and the neutronclient OSC plugin consumes corresponding OpenStack SDK APIs. >> Once this is achieved, users of OpenStack SDK users can see all >> network related features. >> [Migration plan] >> The migration starts from Rocky (if we agree). >> New features should be supported in OpenStack SDK and >> OSC/neutronclient OSC plugin as the first priority. If new feature >> depends on neutronclient python bindings, it can be implemented in >> neutornclient python bindings first and they are ported as part of >> existing feature transition. >> Existing features only supported in neutronclient python bindings are >> ported into OpenStack SDK, >> and neutronclient OSC plugin will consume them once they are >> implemented in OpenStack SDK. > > I think this is a great idea. We've got a bunch of good functional/integrations tests in the sdk gate as well that we can start running on neutron patches so that we don't lose cross-gating. > >> [FAQ] >> 1. Will neutornclient python bindings be removed in future? >> Different from "neutron" CLI, as of now, there is no plan to drop the >> neutronclient python bindings. >> Not a small number of projects consumes it, so it will be maintained as-is. >> The only change is that new features are implemented in OpenStack SDK first and >> enhancements of neutronclient python bindings will be minimum. >> 2. Should projects that consume neutronclient python bindings switch >> to OpenStack SDK? >> Necessarily not. It depends on individual projects. >> Projects like nova that consumes small set of neutron features can >> continue to use neutronclient python bindings. >> Projects like horizon or heat that would like to support a wide range >> of features might be better to switch to OpenStack SDK. > > We've got a PTG session with Heat to discuss potential wider-use of SDK (and have been meaning to reach our to horizon as well) Perhaps a good first step would be to migrate the heat.engine.clients.os.neutron:NeutronClientPlugin code in Heat from neutronclient to SDK. There's already an heat.engine.clients.os.openstacksdk:OpenStackSDKPlugin plugin in Heat. I started a patch to migrate senlin from senlinclient (which is just a thin wrapper around sdk): https://review.openstack.org/#/c/532680/ > > For those of you who are at the PTG, I'll be giving an update on SDK after lunch on Wednesday. I'd also be more than happy to come chat about this more in the neutron room if that's useful to anybody. 
> > Monty > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mordred at inaugust.com Mon Feb 26 10:31:18 2018 From: mordred at inaugust.com (Monty Taylor) Date: Mon, 26 Feb 2018 10:31:18 +0000 Subject: [openstack-dev] [sdk] Nominating Adrian Turjak for core Message-ID: <50dbb7ac-eccf-dc73-6ab2-a6c647060b57@inaugust.com> Hey everybody, I'd like to nominate Adrian Turjak (adriant) for openstacksdk-core. He's an Operator/End User and brings *excellent* deep/strange/edge-condition bugs. He also has a great understanding of the mechanics between Resource/Proxy objects and is super helpful in verifying fixes work in the real world. It's worth noting that Adrian's overall review 'stats' aren't what it traditionally associated with a 'core', but I think this is a good example that life shouldn't be driven by stackalytics and the being a core reviewer is about understanding the code base and being able to evaluate proposed changes. From my POV, Adrian more than qualifies. Thoughts? Monty From slawek at kaplonski.pl Mon Feb 26 10:46:59 2018 From: slawek at kaplonski.pl (=?utf-8?B?U8WCYXdvbWlyIEthcMWCb8WEc2tp?=) Date: Mon, 26 Feb 2018 11:46:59 +0100 Subject: [openstack-dev] [sdk] Nominating Adrian Turjak for core In-Reply-To: <50dbb7ac-eccf-dc73-6ab2-a6c647060b57@inaugust.com> References: <50dbb7ac-eccf-dc73-6ab2-a6c647060b57@inaugust.com> Message-ID: <5CE6678F-86C8-4896-8EA7-C3CAB45E3836@kaplonski.pl> +1 — Best regards Slawek Kaplonski slawek at kaplonski.pl > Wiadomość napisana przez Monty Taylor w dniu 26.02.2018, o godz. 11:31: > > Hey everybody, > > I'd like to nominate Adrian Turjak (adriant) for openstacksdk-core. He's an Operator/End User and brings *excellent* deep/strange/edge-condition bugs. He also has a great understanding of the mechanics between Resource/Proxy objects and is super helpful in verifying fixes work in the real world. > > It's worth noting that Adrian's overall review 'stats' aren't what it traditionally associated with a 'core', but I think this is a good example that life shouldn't be driven by stackalytics and the being a core reviewer is about understanding the code base and being able to evaluate proposed changes. From my POV, Adrian more than qualifies. > > Thoughts? > Monty > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From rosario.disomma.ml at gmail.com Mon Feb 26 10:55:18 2018 From: rosario.disomma.ml at gmail.com (Rosario Di Somma) Date: Mon, 26 Feb 2018 11:55:18 +0100 Subject: [openstack-dev] [sdk] Nominating Adrian Turjak for core In-Reply-To: <5CE6678F-86C8-4896-8EA7-C3CAB45E3836@kaplonski.pl> References: <50dbb7ac-eccf-dc73-6ab2-a6c647060b57@inaugust.com> <5CE6678F-86C8-4896-8EA7-C3CAB45E3836@kaplonski.pl> Message-ID: +1 On Mon, Feb 26, 2018 at 11:46, Sławomir Kapłoński wrote: +1 — Best regards Slawek Kaplonski slawek at kaplonski.pl > Wiadomość napisana przez Monty Taylor w dniu 26.02.2018, o godz. 11:31: > > Hey everybody, > > I'd like to nominate Adrian Turjak (adriant) for openstacksdk-core. He's an Operator/End User and brings *excellent* deep/strange/edge-condition bugs. 
He also has a great understanding of the mechanics between Resource/Proxy objects and is super helpful in verifying fixes work in the real world. > > It's worth noting that Adrian's overall review 'stats' aren't what it traditionally associated with a 'core', but I think this is a good example that life shouldn't be driven by stackalytics and the being a core reviewer is about understanding the code base and being able to evaluate proposed changes. From my POV, Adrian more than qualifies. > > Thoughts? > Monty > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From ramishra at redhat.com Mon Feb 26 10:55:27 2018 From: ramishra at redhat.com (Rabi Mishra) Date: Mon, 26 Feb 2018 16:25:27 +0530 Subject: [openstack-dev] [neutron][sdk] Proposal to migrate neutronclient python bindings to OpenStack SDK In-Reply-To: <47367b72-05d8-4cf3-7bcb-9d800273723e@inaugust.com> References: <47367b72-05d8-4cf3-7bcb-9d800273723e@inaugust.com> Message-ID: On Mon, Feb 26, 2018 at 3:44 PM, Monty Taylor wrote: > On 02/26/2018 09:57 AM, Akihiro Motoki wrote: > >> Hi neutron and openstacksdk team, >> >> This mail proposes to change the first priority of neutron-related >> python binding to OpenStack SDK rather than neutronclient python >> bindings. >> I think it is time to start this as OpenStack SDK became a official >> project in Queens. >> > > ++ > > > [Current situations and problems] >> >> Network OSC commands are categorized into two parts: OSC and >> neutronclient OSC plugin. >> Commands implemented in OSC consumes OpenStack SDK >> and commands implemented as neutronclient OSC plugin consumes >> neutronclient python bindings. >> This brings tricky situation that some features are supported only in >> OpenStack SDK and some features are supported only in neutronclient >> python bindings. >> >> [Proposal] >> >> The proposal is to implement all neutron features in OpenStack SDK as >> the first citizen, >> and the neutronclient OSC plugin consumes corresponding OpenStack SDK >> APIs. >> >> Once this is achieved, users of OpenStack SDK users can see all >> network related features. >> >> [Migration plan] >> >> The migration starts from Rocky (if we agree). >> >> New features should be supported in OpenStack SDK and >> OSC/neutronclient OSC plugin as the first priority. If new feature >> depends on neutronclient python bindings, it can be implemented in >> neutornclient python bindings first and they are ported as part of >> existing feature transition. >> >> Existing features only supported in neutronclient python bindings are >> ported into OpenStack SDK, >> and neutronclient OSC plugin will consume them once they are >> implemented in OpenStack SDK. >> > > I think this is a great idea. We've got a bunch of good > functional/integrations tests in the sdk gate as well that we can start > running on neutron patches so that we don't lose cross-gating. > > [FAQ] >> >> 1. Will neutornclient python bindings be removed in future? 
>> >> Different from "neutron" CLI, as of now, there is no plan to drop the >> neutronclient python bindings. >> Not a small number of projects consumes it, so it will be maintained >> as-is. >> The only change is that new features are implemented in OpenStack SDK >> first and >> enhancements of neutronclient python bindings will be minimum. >> >> 2. Should projects that consume neutronclient python bindings switch >> to OpenStack SDK? >> >> Necessarily not. It depends on individual projects. >> Projects like nova that consumes small set of neutron features can >> continue to use neutronclient python bindings. >> Projects like horizon or heat that would like to support a wide range >> of features might be better to switch to OpenStack SDK. >> > > We've got a PTG session with Heat to discuss potential wider-use of SDK > (and have been meaning to reach our to horizon as well) Perhaps a good > first step would be to migrate the heat.engine.clients.os.neutron:NeutronClientPlugin > code in Heat from neutronclient to SDK. Yeah, this would only be possible after openstacksdk supports all neutron features as mentioned in the proposal. Note: We had initially added the OpenStackSDKPlugin in heat to support neutron segments and were thinking of doing all new neutron stuff with openstacksdk. However, we soon realised that it's not possible when implementing neutron trunk support and had to drop the idea. > There's already an heat.engine.clients.os.openstacksdk:OpenStackSDKPlugin > plugin in Heat. I started a patch to migrate senlin from senlinclient > (which is just a thin wrapper around sdk): https://review.openstack.org/# > /c/532680/ > > For those of you who are at the PTG, I'll be giving an update on SDK after > lunch on Wednesday. I'd also be more than happy to come chat about this > more in the neutron room if that's useful to anybody. > > Monty > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Regards, Rabi Mishra -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris at openstack.org Mon Feb 26 11:25:16 2018 From: chris at openstack.org (Chris Hoge) Date: Mon, 26 Feb 2018 11:25:16 +0000 Subject: [openstack-dev] [k8s][ptg] SIG-K8s Scheduling for Dublin PTG In-Reply-To: <2C2B8E52-0F61-459D-93B7-541BC3B054C3@openstack.org> References: <2C2B8E52-0F61-459D-93B7-541BC3B054C3@openstack.org> Message-ID: <00A27643-6889-43EC-B08E-E115085F92A6@openstack.org> Initial scheduling is live for sig-k8s work at the PTG. Tuesday morning is going to be devoted to external provider migration and documentation. Late morning includes a Kolla sesison. The afternoon is mostly free, with a session set aside for testing. If you have topics you'd like to have sessions on please add them to the schedule. If you’re working on k8s within the OpenStack community, there is a team photo at scheduled for 3:30. https://etherpad.openstack.org/p/sig-k8s-2018-dublin-ptg Chris > On Feb 21, 2018, at 7:41 PM, Chris Hoge wrote: > > SIG-K8s has a planning etherpad available for the Dublin PTG. We have > space scheduled for Tuesday, with approximately eight forty-minute work > blocks. 
For the K8s on OpenStack side of things, we've identified a core > set of priorities that we'll be working on that day, including: > > * Moving openstack-cloud-controller-manager into OpenStack git repo. > * Enabling and improving testing across multiple platforms. > * Identifying documentation gaps. > > Some of these items have some collaboration points with the Infra and > QA teams. If members of those teams could help us identify when they > would be available to work on repository creation and enabling testing, > that would help us to schedule the appropriate times for those topics. > > The work of the SIG-K8s groups also covers other Kubernetes and OpenStack > integrations, including deploying OpenStack on top of Kubernetes. If > anyone from the Kolla, OpenStack-Helm, Loci, Magnum, Kuryr, or Zun > teams would like to schedule cross-project work sessions, please add your > requests and preferred times to the planning etherpad. Additionally, I > can be available to attend work sessions for any of those projects. > > https://etherpad.openstack.org/p/sig-k8s-2018-dublin-ptg > > Thanks! > Chris > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mordred at inaugust.com Mon Feb 26 11:26:14 2018 From: mordred at inaugust.com (Monty Taylor) Date: Mon, 26 Feb 2018 11:26:14 +0000 Subject: [openstack-dev] [neutron][sdk] Proposal to migrate neutronclient python bindings to OpenStack SDK In-Reply-To: References: <47367b72-05d8-4cf3-7bcb-9d800273723e@inaugust.com> Message-ID: On 02/26/2018 10:55 AM, Rabi Mishra wrote: > On Mon, Feb 26, 2018 at 3:44 PM, Monty Taylor > wrote: > > On 02/26/2018 09:57 AM, Akihiro Motoki wrote: > > Hi neutron and openstacksdk team, > > This mail proposes to change the first priority of neutron-related > python binding to OpenStack SDK rather than neutronclient python > bindings. > I think it is time to start this as OpenStack SDK became a official > project in Queens. > > > ++ > > > [Current situations and problems] > > Network OSC commands are categorized into two parts: OSC and > neutronclient OSC plugin. > Commands implemented in OSC consumes OpenStack SDK > and commands implemented as neutronclient OSC plugin consumes > neutronclient python bindings. > This brings tricky situation that some features are supported > only in > OpenStack SDK and some features are supported only in neutronclient > python bindings. > > [Proposal] > > The proposal is to implement all neutron features in OpenStack > SDK as > the first citizen, > and the neutronclient OSC plugin consumes corresponding > OpenStack SDK APIs. > > Once this is achieved, users of OpenStack SDK users can see all > network related features. > > [Migration plan] > > The migration starts from Rocky (if we agree). > > New features should be supported in OpenStack SDK and > OSC/neutronclient OSC plugin as the first priority. If new feature > depends on neutronclient python bindings, it can be implemented in > neutornclient python bindings first and they are ported as part of > existing feature transition. > > Existing features only supported in neutronclient python > bindings are > ported into OpenStack SDK, > and neutronclient OSC plugin will consume them once they are > implemented in OpenStack SDK. > > > I think this is a great idea. 
We've got a bunch of good > functional/integrations tests in the sdk gate as well that we can > start running on neutron patches so that we don't lose cross-gating. > > [FAQ] > > 1. Will neutornclient python bindings be removed in future? > > Different from "neutron" CLI, as of now, there is no plan to > drop the > neutronclient python bindings. > Not a small number of projects consumes it, so it will be > maintained as-is. > The only change is that new features are implemented in > OpenStack SDK first and > enhancements of neutronclient python bindings will be minimum. > > 2. Should projects that consume neutronclient python bindings switch > to OpenStack SDK? > > Necessarily not. It depends on individual projects. > Projects like nova that consumes small set of neutron features can > continue to use neutronclient python bindings. > Projects like horizon or heat that would like to support a wide > range > of features might be better to switch to OpenStack SDK. > > > We've got a PTG session with Heat to discuss potential wider-use of > SDK (and have been meaning to reach our to horizon as well) Perhaps > a good first step would be to migrate the > heat.engine.clients.os.neutron:NeutronClientPlugin code in Heat from > neutronclient to SDK. > > > Yeah, this would only be possible after openstacksdk supports all > neutron features as mentioned in the proposal. ++ > Note: We had initially added the OpenStackSDKPlugin in heat to support > neutron segments and were thinking of doing all new neutron stuff with > openstacksdk. However, we soon realised that it's not possible when > implementing neutron trunk support and had to drop the idea. Maybe we start converting one thing at a time and when we find something sdk doesn't support we should be able to add it pretty quickly... which should then also wind up improving the sdk layer. > There's already an > heat.engine.clients.os.openstacksdk:OpenStackSDKPlugin plugin in > Heat. I started a patch to migrate senlin from senlinclient (which > is just a thin wrapper around sdk): > https://review.openstack.org/#/c/532680/ > > > For those of you who are at the PTG, I'll be giving an update on SDK > after lunch on Wednesday. I'd also be more than happy to come chat > about this more in the neutron room if that's useful to anybody. > > Monty > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > -- > Regards, > Rabi Mishra > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From rbowen at redhat.com Mon Feb 26 11:32:33 2018 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 26 Feb 2018 11:32:33 +0000 Subject: [openstack-dev] [PTG] Project interviews at the PTG In-Reply-To: References: Message-ID: <36237781-311a-82f5-ed02-9b88b9aa6619@redhat.com> A HUGE thank you to all of the people who have signed up to do interviews at the PTG. Tuesday is now completely full, but I still have space/time on the remaining days. I have set up on the 4th floor. Turn left when you exit the lifts, and I'm set up by the couches in the break area. 
Please check the schedule first before dropping in, but if I'm available, we can do a walk-in if you have the time. Thanks! --Rich http://youtube.com/RDOCommunity On 02/19/2018 02:12 PM, Rich Bowen wrote: > I promise this is the last time I'll bug you about this. (Except > on-site, of course!) > > I still have lots and lots of space for team/project/whatever interviews > at the PTG. You can sign up at > https://docs.google.com/spreadsheets/d/1MK7rCgYXCQZP1AgQ0RUiuc-cEXIzW5RuRzz5BWhV4nQ/edit#gid=0 > > > You can see some examples of previous interviews at > http://youtube.com/RDOCommunity > > For the most part, interviews focus on what your team accomplished > during the Queens cycle and what you want to work on in Rocky. However, > we can also talk about other things like governance, community, related > projects, licensing, or anything else that you feel is related to the > OpenStack community. > > I encourage you to talk with your team, and find 2 or 3 people who can > speak most eloquently about what you are trying to do, and find a time > that works for you. > > I'll also have the schedules posted on-site, so you can sign up there, > if you're still unsure of your schedule. But signing up ahead of time > lets me know whether Wednesday is really a vacation day. ;-) > > See you in Dublin! > -- Rich Bowen: Community Architect rbowen at redhat.com @rbowen // @RDOCommunity // @CentOSProject 1 859 351 9166 From mordred at inaugust.com Mon Feb 26 11:41:18 2018 From: mordred at inaugust.com (Monty Taylor) Date: Mon, 26 Feb 2018 11:41:18 +0000 Subject: [openstack-dev] [sdk] Cleaning up openstacksdk core team Message-ID: Hey all, A bunch of stuff has changed in SDK recently, and a few of the historical sdk core folks have also not been around. I'd like to propose removing the following people from the core team: Everett Towes Jesse Noller Richard Theis Terry Howe They're all fantastic humans but they haven't had any activity in quite some time - and not since all the changes of the sdk/shade merge. As is normal in OpenStack land, they'd all be welcome back if they found themselves in a position to dive in again. Any objections? Monty From aaronzhu1121 at gmail.com Mon Feb 26 11:51:44 2018 From: aaronzhu1121 at gmail.com (Rong Zhu) Date: Mon, 26 Feb 2018 11:51:44 +0000 Subject: [openstack-dev] [Murano]No meeting at Feb 28 Message-ID: Hi Teams, Let's cancel meetings at 28 Feb because of PTG. Cheers, Rong Zhu From gkotton at vmware.com Mon Feb 26 13:39:59 2018 From: gkotton at vmware.com (Gary Kotton) Date: Mon, 26 Feb 2018 13:39:59 +0000 Subject: [openstack-dev] [neutron][sdk] Proposal to migrate neutronclient python bindings to OpenStack SDK In-Reply-To: References: <47367b72-05d8-4cf3-7bcb-9d800273723e@inaugust.com> Message-ID: One of the concerns here is that the openstack client does not enable one to configure extensions that are not part of the core reference architecture. So any external third part that tries to have any etension added will not be able to leverage the openstack client. This is a major pain point. On 2/26/18, 1:26 PM, "Monty Taylor" wrote: On 02/26/2018 10:55 AM, Rabi Mishra wrote: > On Mon, Feb 26, 2018 at 3:44 PM, Monty Taylor > wrote: > > On 02/26/2018 09:57 AM, Akihiro Motoki wrote: > > Hi neutron and openstacksdk team, > > This mail proposes to change the first priority of neutron-related > python binding to OpenStack SDK rather than neutronclient python > bindings. > I think it is time to start this as OpenStack SDK became a official > project in Queens. 
> > > ++ > > > [Current situations and problems] > > Network OSC commands are categorized into two parts: OSC and > neutronclient OSC plugin. > Commands implemented in OSC consumes OpenStack SDK > and commands implemented as neutronclient OSC plugin consumes > neutronclient python bindings. > This brings tricky situation that some features are supported > only in > OpenStack SDK and some features are supported only in neutronclient > python bindings. > > [Proposal] > > The proposal is to implement all neutron features in OpenStack > SDK as > the first citizen, > and the neutronclient OSC plugin consumes corresponding > OpenStack SDK APIs. > > Once this is achieved, users of OpenStack SDK users can see all > network related features. > > [Migration plan] > > The migration starts from Rocky (if we agree). > > New features should be supported in OpenStack SDK and > OSC/neutronclient OSC plugin as the first priority. If new feature > depends on neutronclient python bindings, it can be implemented in > neutornclient python bindings first and they are ported as part of > existing feature transition. > > Existing features only supported in neutronclient python > bindings are > ported into OpenStack SDK, > and neutronclient OSC plugin will consume them once they are > implemented in OpenStack SDK. > > > I think this is a great idea. We've got a bunch of good > functional/integrations tests in the sdk gate as well that we can > start running on neutron patches so that we don't lose cross-gating. > > [FAQ] > > 1. Will neutornclient python bindings be removed in future? > > Different from "neutron" CLI, as of now, there is no plan to > drop the > neutronclient python bindings. > Not a small number of projects consumes it, so it will be > maintained as-is. > The only change is that new features are implemented in > OpenStack SDK first and > enhancements of neutronclient python bindings will be minimum. > > 2. Should projects that consume neutronclient python bindings switch > to OpenStack SDK? > > Necessarily not. It depends on individual projects. > Projects like nova that consumes small set of neutron features can > continue to use neutronclient python bindings. > Projects like horizon or heat that would like to support a wide > range > of features might be better to switch to OpenStack SDK. > > > We've got a PTG session with Heat to discuss potential wider-use of > SDK (and have been meaning to reach our to horizon as well) Perhaps > a good first step would be to migrate the > heat.engine.clients.os.neutron:NeutronClientPlugin code in Heat from > neutronclient to SDK. > > > Yeah, this would only be possible after openstacksdk supports all > neutron features as mentioned in the proposal. ++ > Note: We had initially added the OpenStackSDKPlugin in heat to support > neutron segments and were thinking of doing all new neutron stuff with > openstacksdk. However, we soon realised that it's not possible when > implementing neutron trunk support and had to drop the idea. Maybe we start converting one thing at a time and when we find something sdk doesn't support we should be able to add it pretty quickly... which should then also wind up improving the sdk layer. > There's already an > heat.engine.clients.os.openstacksdk:OpenStackSDKPlugin plugin in > Heat. I started a patch to migrate senlin from senlinclient (which > is just a thin wrapper around sdk): > https://review.openstack.org/#/c/532680/ > > > For those of you who are at the PTG, I'll be giving an update on SDK > after lunch on Wednesday. 
I'd also be more than happy to come chat > about this more in the neutron room if that's useful to anybody. > > Monty > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > -- > Regards, > Rabi Mishra > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From doug at doughellmann.com Mon Feb 26 14:16:50 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 26 Feb 2018 14:16:50 +0000 Subject: [openstack-dev] [sdk] Cleaning up openstacksdk core team In-Reply-To: References: Message-ID: <1519654575-sup-1710@lrrr.local> Excerpts from Monty Taylor's message of 2018-02-26 11:41:18 +0000: > Hey all, > > A bunch of stuff has changed in SDK recently, and a few of the > historical sdk core folks have also not been around. I'd like to propose > removing the following people from the core team: > > Everett Towes > Jesse Noller > Richard Theis > Terry Howe > > They're all fantastic humans but they haven't had any activity in quite > some time - and not since all the changes of the sdk/shade merge. As is > normal in OpenStack land, they'd all be welcome back if they found > themselves in a position to dive in again. > > Any objections? > > Monty > +1 for cleanup. As you say, we can add them back easily if we need to. Doug From thierry at openstack.org Mon Feb 26 14:19:01 2018 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 26 Feb 2018 15:19:01 +0100 Subject: [openstack-dev] [ptg] Release cycles, stable branch maintenance, LTS vs. downstream consumption models In-Reply-To: <0d00a8e0-1913-19de-2a00-b5a14de09720@debian.org> References: <0d00a8e0-1913-19de-2a00-b5a14de09720@debian.org> Message-ID: <37d5b3c0-6ba4-c38c-4100-3c1e3cfa6edb@openstack.org> Thomas Goirand wrote: > On 02/24/2018 03:42 PM, Thierry Carrez wrote: >> On Tuesday afternoon we'll have a discussion on release cycle duration, >> stable branch maintenance, and LTS vs. how OpenStack is consumed downstream. >> >> I set up an etherpad at: >> https://etherpad.openstack.org/p/release-cycles-ptg-rocky >> >> Please add the topics you'd like to cover. > > I really wish I could be there. Is there any ways I could attend > remotely? Like someone with Skype or something... You should send me a summary of your position on the topics (or add it to the etherpad) so that we can make sure to take your position into account. As for remote participation, I'll see if I can find a volunteer to patch you in. Worst case scenario we'll document on the etherpad and you could ask questions / add extra input there. 
-- Thierry Carrez (ttx) From dabarren at gmail.com Mon Feb 26 15:01:28 2018 From: dabarren at gmail.com (Eduardo Gonzalez) Date: Mon, 26 Feb 2018 16:01:28 +0100 Subject: [openstack-dev] [kolla] Ubuntu jobs failed on pike branch due to package dependency In-Reply-To: References: Message-ID: I prefer option 1, breaking stable policy is not good for users. They will be forced to upgrade a major ceph version during a minor upgrade, which is not good and not excepted to be done ever. Regards 2018-02-26 9:51 GMT+01:00 Shake Chen : > I prefer to the option 2. > > On Mon, Feb 26, 2018 at 4:39 PM, Jeffrey Zhang > wrote: > >> Recently, the Ubuntu jobs on pike branch are red[0]. With some debugging, >> i found it is caused by >> package dependency. >> >> >> *Background* >> >> Since we have no time to upgrade ceph from Jewel to Luminous at the end >> of pike cycle, we pinned >> Ceph to Jewel on pike branch. This works on CentOS, because ceph jewel >> and ceph luminous are on >> the different repos. >> >> But in Ubuntu Cloud Archive repo, it bump ceph to Luminous. Even though >> ceph luminous still exists >> on UCA. But since qemu 2.10 depends on ceph luminous, we have to ping >> qemu to 2.5 to use ceph Jewel[1]. >> And this works since then. >> >> >> *Now Issue* >> >> But recently, UCA changed the libvirt-daemon package dependency, and >> added following, >> >> Package: libvirt-daemon >> Version: 3.6.0-1ubuntu6.2~cloud0 >> ... >> Breaks: qemu (<< 1:2.10+dfsg-0ubuntu3.4~), qemu-kvm (<< >> 1:2.10+dfsg-0ubuntu3.4~) >> >> It requires qemu 2.10 now. So dependency is broken and nova-libvirt >> container is failed to build. >> >> >> *Possible Solution* >> >> I think there two possible ways now, but none of them is good. >> >> 1. install ceph Luminuous on nova-libvirt container and ceph Jewel in >> ceph-* container >> 2. Bump ceph from jewel to luminous. But this breaks the backport policy, >> obviously. >> >> So any idea on this? >> >> [0] https://review.openstack.org/534149 >> [1] https://review.openstack.org/#/c/526931/ >> >> -- >> Regards, >> Jeffrey Zhang >> Blog: http://xcodest.me >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > > -- > Shake Chen > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Mon Feb 26 15:45:35 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Mon, 26 Feb 2018 15:45:35 +0000 Subject: [openstack-dev] [magnum][keystone] clusters, trustees and projects In-Reply-To: References: Message-ID: <2119f48f-c2b6-0087-3ad0-0bd77b210cd5@gmail.com> On 02/26/2018 10:17 AM, Ricardo Rocha wrote: > Hi. > > We have an issue on the way Magnum uses keystone trusts. > > Magnum clusters are created in a given project using HEAT, and require > a trust token to communicate back with OpenStack services - there is > also integration with Kubernetes via a cloud provider. 
> > This trust belongs to a given user, not the project, so whenever we > disable the user's account - for example when a user leaves the > organization - the cluster becomes unhealthy as the trust is no longer > valid. Given the token is available in the cluster nodes, accessible > by users, a trust linked to a service account is also not a viable > solution. > > Is there an existing alternative for this kind of use case? I guess > what we might need is a trust that is linked to the project. This was proposed in the original application credential specification [0] [1]. The problem is that you're sharing an authentication mechanism with multiple people when you associate it to the life cycle of a project. When a user is deleted or removed from the project, nothing would stop them from accessing OpenStack APIs if the application credential or trust isn't rotated out. Even if the credential or trust were scoped to the project's life cycle, it would need to be rotated out and replaced when users come and go for the same reason. So it would still be associated to the user life cycle, just indirectly. Otherwise you're allowing unauthorized access to something that should be protected. If you're at the PTG - we will be having a session on application credentials tomorrow (Tuesday) afternoon [2] in the identity-integration room [3]. [0] https://review.openstack.org/#/c/450415/ [1] https://review.openstack.org/#/c/512505/ [2] https://etherpad.openstack.org/p/application-credentials-rocky-ptg [3] http://ptg.openstack.org/ptg.html > > I believe the same issue would be there using application credentials, > as the ownership is similar. > > Cheers, > Ricardo > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From inc007 at gmail.com Mon Feb 26 16:53:20 2018 From: inc007 at gmail.com (=?UTF-8?B?TWljaGHFgiBKYXN0cnrEmWJza2k=?=) Date: Mon, 26 Feb 2018 08:53:20 -0800 Subject: [openstack-dev] [kolla] Ubuntu jobs failed on pike branch due to package dependency In-Reply-To: References: Message-ID: I'm for option 1 definitely. accidental ceph upgrade during routine minor version upgrade is something we don't want. We will need big warning about this version mismatch in release notes. On 26 February 2018 at 07:01, Eduardo Gonzalez wrote: > I prefer option 1, breaking stable policy is not good for users. They will > be forced to upgrade a major ceph version during a minor upgrade, which is > not good and not excepted to be done ever. > > Regards > > > 2018-02-26 9:51 GMT+01:00 Shake Chen : >> >> I prefer to the option 2. >> >> On Mon, Feb 26, 2018 at 4:39 PM, Jeffrey Zhang >> wrote: >>> >>> Recently, the Ubuntu jobs on pike branch are red[0]. With some debugging, >>> i found it is caused by >>> package dependency. >>> >>> >>> *Background* >>> >>> Since we have no time to upgrade ceph from Jewel to Luminous at the end >>> of pike cycle, we pinned >>> Ceph to Jewel on pike branch. This works on CentOS, because ceph jewel >>> and ceph luminous are on >>> the different repos. >>> >>> But in Ubuntu Cloud Archive repo, it bump ceph to Luminous. Even though >>> ceph luminous still exists >>> on UCA. 
But since qemu 2.10 depends on ceph luminous, we have to ping >>> qemu to 2.5 to use ceph Jewel[1]. >>> And this works since then. >>> >>> >>> *Now Issue* >>> >>> But recently, UCA changed the libvirt-daemon package dependency, and >>> added following, >>> >>> Package: libvirt-daemon >>> Version: 3.6.0-1ubuntu6.2~cloud0 >>> ... >>> Breaks: qemu (<< 1:2.10+dfsg-0ubuntu3.4~), qemu-kvm (<< >>> 1:2.10+dfsg-0ubuntu3.4~) >>> >>> It requires qemu 2.10 now. So dependency is broken and nova-libvirt >>> container is failed to build. >>> >>> >>> *Possible Solution* >>> >>> I think there two possible ways now, but none of them is good. >>> >>> 1. install ceph Luminuous on nova-libvirt container and ceph Jewel in >>> ceph-* container >>> 2. Bump ceph from jewel to luminous. But this breaks the backport policy, >>> obviously. >>> >>> So any idea on this? >>> >>> [0] https://review.openstack.org/534149 >>> [1] https://review.openstack.org/#/c/526931/ >>> >>> -- >>> Regards, >>> Jeffrey Zhang >>> Blog: http://xcodest.me >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> >> >> -- >> Shake Chen >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From ruby.loo at intel.com Mon Feb 26 17:32:17 2018 From: ruby.loo at intel.com (Loo, Ruby) Date: Mon, 26 Feb 2018 17:32:17 +0000 Subject: [openstack-dev] [ironic] Stepping down from Ironic core In-Reply-To: References: Message-ID: <2DBF0A16-213D-4F5A-A608-25929309696E@intel.com> Hey Vasyl, Thanks for all your contributions to Ironic! I hope that you'll still find a bit of time for us :-) --ruby From: Vasyl Saienko Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Friday, February 23, 2018 at 9:02 AM To: "OpenStack Development Mailing List (not for usage questions)" Subject: [openstack-dev] [ironic] Stepping down from Ironic core Hey Ironic community! Unfortunately I don't work on Ironic as much as I used to any more, so i'm stepping down from core reviewers. So, thanks for everything everyone, it's been great to work with you all for all these years!!! Sincerely, Vasyl Saienko -------------- next part -------------- An HTML attachment was scrubbed... URL: From paul.bourke at oracle.com Mon Feb 26 19:39:00 2018 From: paul.bourke at oracle.com (Paul Bourke) Date: Mon, 26 Feb 2018 19:39:00 +0000 Subject: [openstack-dev] [kolla][ptg] Team dinner Message-ID: Hey Kolla, Hope you're all enjoying Dublin so far :) Some have expressed interest in getting together for a team meal, how does Thursday sound? Please reply to this with +1/-1 and I can see about booking something. 
Cheers, -Paul From dabarren at gmail.com Mon Feb 26 19:57:59 2018 From: dabarren at gmail.com (Eduardo Gonzalez) Date: Mon, 26 Feb 2018 19:57:59 +0000 Subject: [openstack-dev] [kolla][ptg] Team dinner In-Reply-To: References: Message-ID: +1 On Mon, Feb 26, 2018, 7:40 PM Paul Bourke wrote: > Hey Kolla, > > Hope you're all enjoying Dublin so far :) Some have expressed interest > in getting together for a team meal, how does Thursday sound? Please > reply to this with +1/-1 and I can see about booking something. > > Cheers, > -Paul > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From john.griffith8 at gmail.com Mon Feb 26 20:09:27 2018 From: john.griffith8 at gmail.com (John Griffith) Date: Mon, 26 Feb 2018 13:09:27 -0700 Subject: [openstack-dev] [cinder][nova] Update attachments on replication failover Message-ID: Hey Everyone, Something I've been looking at with Cinder's replication (sort of the next step in the evolution if you will) is the ability to refresh/renew in-use volumes that were part of a migration event. We do something similar with extend-volume on the Nova side through the use of Instance Actions I believe, and I'm wondering how folks would feel about the same sort of thing being added upon failover/failback for replicated Cinder volumes? If you're not familiar, Cinder allows a volume to be replicated to multiple physical backend devices, and in the case of a DR situation an Operator can failover a backend device (or even a single volume). This process results in Cinder making some calls to the respective backend device, it doing it's magic and updating the Cinder Volume Model with new attachment info. This works great, except for the case of users that have a bunch of in-use volumes on that particular backend. We don't currently do anything to refresh/update them, so it's a manual process of running through a detach/attach loop. I'm interested in looking at creating a mechanism to "refresh" all of the existing/current attachments as part of the Cinder Failover process. Curious if anybody has any thoughts on this, or if anyone has already done something related to this topic? Thanks, John -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Mon Feb 26 21:13:44 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 26 Feb 2018 21:13:44 +0000 Subject: [openstack-dev] [cinder][nova] Update attachments on replication failover In-Reply-To: References: Message-ID: <5bd499af-5e1e-bf9c-8ff5-815cb8be543f@gmail.com> On 2/26/2018 8:09 PM, John Griffith wrote: > I'm interested in looking at creating a mechanism to "refresh" all of > the existing/current attachments as part of the Cinder Failover process. What would be involved on the nova side for the refresh? I'm guessing disconnect/connect the volume via os-brick (or whatever for non-libvirt drivers), resulting in a new host connector from os-brick that nova would use to update the existing volume attachment for the volume/server instance combo? 
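Roughly what I'm picturing on the compute side - just a sketch against
os-brick's connector factory, not real nova code (the rootwrap helper,
the IP and the connection_info contents below are all placeholders):

    from os_brick.initiator import connector

    def refresh_attachment(old_connection_info, new_connection_info,
                           device_info, root_helper):
        # Tear down the connection to the old (pre-failover) backend using
        # the connection_info/device_info we stored at attach time.
        old = connector.InitiatorConnector.factory(
            old_connection_info['driver_volume_type'], root_helper)
        old.disconnect_volume(old_connection_info['data'], device_info)

        # Build a fresh host connector; this is what nova would hand back
        # to cinder so the attachment record points at the new backend.
        host_connector = connector.get_connector_properties(
            root_helper, my_ip='192.0.2.10', multipath=False,
            enforce_multipath=False)

        # Connect to the replication target with the new connection_info
        # cinder returns for the updated attachment.
        new = connector.InitiatorConnector.factory(
            new_connection_info['driver_volume_type'], root_helper)
        device_info = new.connect_volume(new_connection_info['data'])
        return host_connector, device_info

i.e. the new host connector goes to cinder on the attachment update, and the
device_info returned by connect_volume() replaces what we stored when the
volume was originally attached.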
-- Thanks, Matt From john.griffith8 at gmail.com Mon Feb 26 21:28:23 2018 From: john.griffith8 at gmail.com (John Griffith) Date: Mon, 26 Feb 2018 14:28:23 -0700 Subject: [openstack-dev] [cinder][nova] Update attachments on replication failover In-Reply-To: <5bd499af-5e1e-bf9c-8ff5-815cb8be543f@gmail.com> References: <5bd499af-5e1e-bf9c-8ff5-815cb8be543f@gmail.com> Message-ID: On Mon, Feb 26, 2018 at 2:13 PM, Matt Riedemann wrote: > On 2/26/2018 8:09 PM, John Griffith wrote: > >> I'm interested in looking at creating a mechanism to "refresh" all of the >> existing/current attachments as part of the Cinder Failover process. >> > > What would be involved on the nova side for the refresh? I'm guessing > disconnect/connect the volume via os-brick (or whatever for non-libvirt > drivers), resulting in a new host connector from os-brick that nova would > use to update the existing volume attachment for the volume/server instance > combo? ​Yep, that's pretty much exactly what I'm thinking about / looking at. I'm also wondering how much of the extend actions we can leverage here, but I haven't looked through all of that yet.​ > > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Mon Feb 26 21:38:07 2018 From: mark at stackhpc.com (Mark Goddard) Date: Mon, 26 Feb 2018 21:38:07 +0000 Subject: [openstack-dev] [kolla][ptg] Team dinner In-Reply-To: References: Message-ID: +1 On 26 Feb 2018 7:58 p.m., "Eduardo Gonzalez" wrote: > +1 > > On Mon, Feb 26, 2018, 7:40 PM Paul Bourke wrote: > >> Hey Kolla, >> >> Hope you're all enjoying Dublin so far :) Some have expressed interest >> in getting together for a team meal, how does Thursday sound? Please >> reply to this with +1/-1 and I can see about booking something. >> >> Cheers, >> -Paul >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >> unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dardelean at cloudbasesolutions.com Mon Feb 26 21:40:29 2018 From: dardelean at cloudbasesolutions.com (Dan Ardelean) Date: Mon, 26 Feb 2018 21:40:29 +0000 Subject: [openstack-dev] [kolla][ptg] Team dinner In-Reply-To: References: Message-ID: +1 On 26 Feb 2018, at 21:38, Mark Goddard > wrote: +1 On 26 Feb 2018 7:58 p.m., "Eduardo Gonzalez" > wrote: +1 On Mon, Feb 26, 2018, 7:40 PM Paul Bourke > wrote: Hey Kolla, Hope you're all enjoying Dublin so far :) Some have expressed interest in getting together for a team meal, how does Thursday sound? Please reply to this with +1/-1 and I can see about booking something. 
Cheers, -Paul __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Mon Feb 26 21:47:53 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 26 Feb 2018 21:47:53 +0000 Subject: [openstack-dev] [cinder][nova] Update attachments on replication failover In-Reply-To: References: <5bd499af-5e1e-bf9c-8ff5-815cb8be543f@gmail.com> Message-ID: On 2/26/2018 9:28 PM, John Griffith wrote: > I'm also wondering how much of the extend actions we can leverage here, > but I haven't looked through all of that yet.​ The os-server-external-events API in nova is generic. We'd just add a new microversion to register a new tag for this event. Like the extend volume event, the volume ID would be provided as input to the API and nova would use that to identify the instance + volume to refresh on the compute host. We'd also register a new instance action / event record so that users could poll the os-instance-actions API for completion of the operation. -- Thanks, Matt From john.griffith8 at gmail.com Mon Feb 26 21:52:57 2018 From: john.griffith8 at gmail.com (John Griffith) Date: Mon, 26 Feb 2018 14:52:57 -0700 Subject: [openstack-dev] [cinder][nova] Update attachments on replication failover In-Reply-To: References: <5bd499af-5e1e-bf9c-8ff5-815cb8be543f@gmail.com> Message-ID: On Mon, Feb 26, 2018 at 2:47 PM, Matt Riedemann wrote: > On 2/26/2018 9:28 PM, John Griffith wrote: > >> I'm also wondering how much of the extend actions we can leverage here, >> but I haven't looked through all of that yet.​ >> > > The os-server-external-events API in nova is generic. We'd just add a new > microversion to register a new tag for this event. Like the extend volume > event, the volume ID would be provided as input to the API and nova would > use that to identify the instance + volume to refresh on the compute host. > > We'd also register a new instance action / event record so that users > could poll the os-instance-actions API for completion of the operation. ​Yeah, it seems like this would be pretty handy with what's there. So are folks good with that? Wanted to make sure there's nothing contentious there before I propose a spec on the Nova and Cinder sides. 
If you think it seems at least worth proposing I'll work on it and get something ready as a welcome home from Dublin gift for everyone :) ​ > > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From berendt at betacloud-solutions.de Mon Feb 26 22:05:09 2018 From: berendt at betacloud-solutions.de (Christian Berendt) Date: Mon, 26 Feb 2018 23:05:09 +0100 Subject: [openstack-dev] [kolla][ptg] Team dinner In-Reply-To: References: Message-ID: <88958309-EB74-4493-B0BA-D22DDB7273AE@betacloud-solutions.de> +1 Thanks for the organisation. > On 26. Feb 2018, at 20:39, Paul Bourke wrote: > > Hey Kolla, > > Hope you're all enjoying Dublin so far :) Some have expressed interest in getting together for a team meal, how does Thursday sound? Please reply to this with +1/-1 and I can see about booking something. > > Cheers, > -Paul > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Christian Berendt Chief Executive Officer (CEO) Mail: berendt at betacloud-solutions.de Web: https://www.betacloud-solutions.de Betacloud Solutions GmbH Teckstrasse 62 / 70190 Stuttgart / Deutschland Geschäftsführer: Christian Berendt Unternehmenssitz: Stuttgart Amtsgericht: Stuttgart, HRB 756139 From Greg.Waines at windriver.com Mon Feb 26 23:22:30 2018 From: Greg.Waines at windriver.com (Waines, Greg) Date: Mon, 26 Feb 2018 23:22:30 +0000 Subject: [openstack-dev] [refstack] Full list of API Tests versus 'OpenStack Powered' Tests Message-ID: <5823E872-A563-4684-A124-6E509AFF0F8A@windriver.com> · I have a commercial OpenStack product that I would like to claim compliancy with RefStack · Is it sufficient to claim compliance with only the “OpenStack Powered Platform” TESTS ? o i.e. https://refstack.openstack.org/#/guidelines o i.e. the ~350-ish compute + object-storage tests · OR · Should I be using the COMPLETE API Test Set ? o i.e. the > 1,000 tests from various domains that get run if you do not specify a test-list Greg. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mvoelker at vmware.com Tue Feb 27 09:12:03 2018 From: mvoelker at vmware.com (Mark Voelker) Date: Tue, 27 Feb 2018 09:12:03 +0000 Subject: [openstack-dev] [refstack] Full list of API Tests versus 'OpenStack Powered' Tests In-Reply-To: <5823E872-A563-4684-A124-6E509AFF0F8A@windriver.com> References: <5823E872-A563-4684-A124-6E509AFF0F8A@windriver.com> Message-ID: <379509E4-F39C-43BB-999C-5C3F53554640@vmware.com> Hi Greg, Only the tests listed in the Guidelines are required to pass to get an OpenStack Powered logo and trademark usage license from the OpenStack Foundation (you must also use the designated sections of upstream code specified in the Guideline documents). However vendors are strongly encouraged to run all the tests. Doing so provides some data to the Interop Working Group about how many products support capabilities that aren’t on the required list today, but might be considered in the future. 
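For example, once you have a tempest.conf that works against your cloud, you
can point refstack-client at the test list exported from the guidelines page
to run only the required set, or drop --test-list to run everything (the
2017.09 file name below is just a placeholder for wherever you saved the
exported list):

    # Only the tests required by the OpenStack Powered Platform guideline
    refstack-client test -c ~/tempest.conf -v --test-list ./2017.09-platform-required-tests.txt

    # The full API test set - the extra results are useful data for the Interop WG
    refstack-client test -c ~/tempest.conf -v

Either way the results can then be submitted with refstack-client upload if
you want them recorded for your product.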
If you have any questions about the process, please contact interop at openstack.org and the Foundation staff will be happy to help! At Your Service, Mark T. Voelker > On Feb 26, 2018, at 11:22 PM, Waines, Greg wrote: > > > · I have a commercial OpenStack product that I would like to claim compliancy with RefStack > · Is it sufficient to claim compliance with only the “OpenStack Powered Platform” TESTS ? > o i.e. https://refstack.openstack.org/#/guidelines > o i.e. the ~350-ish compute + object-storage tests > · OR > · Should I be using the COMPLETE API Test Set ? > o i.e. the > 1,000 tests from various domains that get run if you do not specify a test-list > > Greg. > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From wangxiyuan1007 at gmail.com Tue Feb 27 09:22:23 2018 From: wangxiyuan1007 at gmail.com (Xiyuan Wang) Date: Tue, 27 Feb 2018 09:22:23 +0000 Subject: [openstack-dev] [Zaqar] Nominating yangzhenyu for Zaqar core In-Reply-To: References: Message-ID: +1, zhenyu has done a lot of useful features in Zaqar. Such as delay queue and message abstract support. Some others are on the list for Rocky as well. Great work. 2018-02-26 1:38 GMT+00:00 Fei Long Wang : > Hi team, > > I would like to propose adding Zhenyu Yang(yangzhenyu) for the Zaqar core > team. He has been an awesome contributor since joining the Zaqar team. And > now he is the most active non-core contributor on Zaqar projects for the > last 180 days[1]. Zhenyu has great technical expertise and contributed many > high quality patches. I'm sure he would be an excellent addition to the > team. If no one objects, I'll proceed and add him in a week from now. > Thanks. > > [1] http://stackalytics.com/report/contribution/zaqar-group/180 > > -- > Cheers & Best regards, > Feilong Wang (王飞龙) > -------------------------------------------------------------------------- > Senior Cloud Software Engineer > Tel: +64-48032246 > Email: flwang at catalyst.net.nz > Catalyst IT Limited > Level 6, Catalyst House, 150 Willis Street, Wellington > -------------------------------------------------------------------------- > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zh.f at outlook.com Tue Feb 27 09:32:29 2018 From: zh.f at outlook.com (Zhang Fan) Date: Tue, 27 Feb 2018 09:32:29 +0000 Subject: [openstack-dev] [Zaqar] Nominating yangzhenyu for Zaqar core Message-ID: Big +1 for Zhenyu Yang. Although I am not a contributor of zaqar, but as his former colleague, I do know that he's definitely an excellent developer. Best wishes. Fan Zhang Original Message Sender: Xiyuan Wang Recipient: OpenStack Development Mailing List (not for usage questions) Date: Tuesday, Feb 27, 2018 17:22 Subject: Re: [openstack-dev] [Zaqar] Nominating yangzhenyu for Zaqar core +1, zhenyu has done a lot of useful features in Zaqar. Such as delay queue and message abstract support. Some others are on the list for Rocky as well. Great work. 
2018-02-26 1:38 GMT+00:00 Fei Long Wang >: Hi team, I would like to propose adding Zhenyu Yang(yangzhenyu) for the Zaqar core team. He has been an awesome contributor since joining the Zaqar team. And now he is the most active non-core contributor on Zaqar projects for the last 180 days[1]. Zhenyu has great technical expertise and contributed many high quality patches. I'm sure he would be an excellent addition to the team. If no one objects, I'll proceed and add him in a week from now. Thanks. [1] http://stackalytics.com/report/contribution/zaqar-group/180 -- Cheers & Best regards, Feilong Wang (王飞龙) -------------------------------------------------------------------------- Senior Cloud Software Engineer Tel: +64-48032246 Email: flwang at catalyst.net.nz Catalyst IT Limited Level 6, Catalyst House, 150 Willis Street, Wellington -------------------------------------------------------------------------- __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Tue Feb 27 09:45:16 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 27 Feb 2018 09:45:16 +0000 Subject: [openstack-dev] [cinder][nova] Update attachments on replication failover In-Reply-To: References: <5bd499af-5e1e-bf9c-8ff5-815cb8be543f@gmail.com> Message-ID: On 2/26/2018 9:52 PM, John Griffith wrote: > ​Yeah, it seems like this would be pretty handy with what's there.  So > are folks good with that?  Wanted to make sure there's nothing > contentious there before I propose a spec on the Nova and Cinder sides. > If you think it seems at least worth proposing I'll work on it and get > something ready as a welcome home from Dublin gift for everyone :) I'll put it on the nova/cinder PTG etherpad agenda for Thursday morning. This seems like simple plumbing on the nova side, so not any major problems from me. -- Thanks, Matt From mbooth at redhat.com Tue Feb 27 10:02:57 2018 From: mbooth at redhat.com (Matthew Booth) Date: Tue, 27 Feb 2018 10:02:57 +0000 Subject: [openstack-dev] [cinder][nova] Update attachments on replication failover In-Reply-To: References: <5bd499af-5e1e-bf9c-8ff5-815cb8be543f@gmail.com> Message-ID: Couple of thoughts: Sounds like the work Nova will have to do is identical to volume update (swap volume). i.e. Change where a disk's backing store is without actually changing the disk. Multi-attach! There might be more than 1 instance per volume, and we can't currently support volume update for multi-attached volumes. Matt On 27 February 2018 at 09:45, Matt Riedemann wrote: > On 2/26/2018 9:52 PM, John Griffith wrote: > >> ​Yeah, it seems like this would be pretty handy with what's there. So >> are folks good with that? Wanted to make sure there's nothing contentious >> there before I propose a spec on the Nova and Cinder sides. If you think it >> seems at least worth proposing I'll work on it and get something ready as a >> welcome home from Dublin gift for everyone :) >> > > I'll put it on the nova/cinder PTG etherpad agenda for Thursday morning. > This seems like simple plumbing on the nova side, so not any major problems > from me. 
> > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Matthew Booth Red Hat OpenStack Engineer, Compute DFG Phone: +442070094448 (UK) -------------- next part -------------- An HTML attachment was scrubbed... URL: From jungleboyj at gmail.com Tue Feb 27 10:25:53 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Tue, 27 Feb 2018 04:25:53 -0600 Subject: [openstack-dev] [cinder][ptg] Team Photo Rescheduled!!! Message-ID: <84be0c37-d459-8e5d-faff-4c66f83e8d75@gmail.com> Team, Some people had conflicts with the 12:50 time for our Team Photo due to lunch and presentations so I have decided to reschedule the photo. We are going to squeeze it in before the Nova/Cinder cross project meeting at 9:00 am on Thursday 3/1.  So, be punctual to get to the pitch on Thursday morning and bring a smile! Jay From eumel at arcor.de Tue Feb 27 10:32:50 2018 From: eumel at arcor.de (Frank Kloeker) Date: Tue, 27 Feb 2018 11:32:50 +0100 Subject: [openstack-dev] Zanata upgrade to version 4 Message-ID: Hello, the translation phase for Queens is just over. Many thanks for all that work during the cycle and translations in different OpenStack projects. We will take the chance now to upgrade our translation platform to a new version. We don't expect bigger issues because we tested the upgrade process anyway on translate-dev.openstack.org. Nevertheless the platform could be not available this week. So please note the hint. kind regards Frank (PTL I18n) From melwittt at gmail.com Tue Feb 27 10:34:06 2018 From: melwittt at gmail.com (melanie witt) Date: Tue, 27 Feb 2018 10:34:06 +0000 Subject: [openstack-dev] [nova][ptg] reminder: cyborg nova-interaction discussion at 2pm this afternoon Message-ID: Howdy everyone, This is just a reminder that we have some time scheduled to chat with the Cyborg team at 2pm this afternoon at the Cyborg room (Suite 665). From our PTG etherpad agenda[1] : "Cyborg (previously known as Nomad) is an OpenStack project that aims to provide a general purpose management framework for acceleration resources (i.e. various types of accelerators such as Crypto cards,GPU, FPGA, NVMe/NOF SSDs, ODP, DPDK/SPDK and so on). https://wiki.openstack.org/wiki/Cyborg” If you are interested, please join us! -melanie [1] https://etherpad.openstack.org/p/nova-ptg-rocky at L572 From melwittt at gmail.com Tue Feb 27 10:43:35 2018 From: melwittt at gmail.com (melanie witt) Date: Tue, 27 Feb 2018 10:43:35 +0000 Subject: [openstack-dev] [nova][ptg] team photo Thursday at 11:10 AM Message-ID: <8AE0FD41-0C5B-4C48-A0C6-102987CB693D@gmail.com> Hey everyone, We have time scheduled on Thursday morning 11:10-11:20 AM for a team photo at the PTG. For those in the Nova room (Davin Suite), we’ll walk together to the registration area before 11:10 AM to meet before we’ll be escorted down to the pitch (sports field) to take the photo. If you won’t be in the Nova room on Thursday morning, just meet us at the registration area at 11:10 AM. It will be really cold outside, so be prepared for that. 
Cheers, -melanie From lhinds at redhat.com Tue Feb 27 10:45:25 2018 From: lhinds at redhat.com (Luke Hinds) Date: Tue, 27 Feb 2018 10:45:25 +0000 Subject: [openstack-dev] [PTL][SIG][PTG]Team Photos In-Reply-To: References: Message-ID: Hi Kendall, Now the day has arrived, could you let us know about logistics..is there somewhere we should go to wait before being collected and heading to the pitch? Cheers, Luke On Wed, Feb 21, 2018 at 11:48 PM, Kendall Nelson wrote: > Hello Everyone! > > I just wanted to remind you all that you have till *Monday Feburary 26th* > to sign up if your team or group is interested in a team photo on the Croke > Park pitch! We still have slots available Tuesday afternoon and Thursday > morning. > > -Kendall (diablo_rojo) > > On Thu, Feb 8, 2018 at 10:21 AM Kendall Nelson > wrote: > >> This link might work better for everyone: >> https://docs.google.com/spreadsheets/d/1J2MRdVQzSyakz9HgTHfwYPe49PaoT >> ypX66eNURsopQY/edit?usp=sharing >> >> -Kendall (diablo_rojo) >> >> >> On Wed, Feb 7, 2018 at 9:15 PM Kendall Nelson >> wrote: >> >>> Hello PTLs and SIG Chairs! >>> >>> So here's the deal, we have 50 spots that are first come, first >>> served. We have slots available before and after lunch both Tuesday and >>> Thursday. >>> >>> The google sheet here[1] should be set up so you have access to edit, >>> but if you can't for some reason just reply directly to me and I can add >>> your team to the list (I need team/sig name and contact email). >>> >>> I will be locking the google sheet on *Monday February 26th so I need >>> to know if your team is interested by then. * >>> >>> See you soon! >>> >>> - Kendall Nelson (diablo_rojo) >>> >>> [1] https://docs.google.com/spreadsheets/d/ >>> 1J2MRdVQzSyakz9HgTHfwYPe49PaoTypX66eNURsopQY/edit?usp=sharing >>> >>> >>> >>> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Luke Hinds | NFV Partner Engineering | CTO Office | Red Hat e: lhinds at redhat.com | irc: lhinds @freenode | t: +44 12 52 36 2483 -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Tue Feb 27 10:48:45 2018 From: zigo at debian.org (Thomas Goirand) Date: Tue, 27 Feb 2018 11:48:45 +0100 Subject: [openstack-dev] [heat] heat-dashboard is non-free, with broken unit test env In-Reply-To: References: <4129015c-b120-786f-60e5-2d6a634f3999@debian.org> Message-ID: <664e765d-77ed-0255-625e-a56cc9322aac@debian.org> On 02/23/2018 09:29 AM, Xinni Ge wrote: > Hi there, > > We are aware of the javascript embedded issue, and working on it now, > the patch will be summited later. > > As for the unittest failure, we are still investigating it. We will > contant you as soon as we find out the cause. > > Sorry to bring troubles to you. We will be grateful if you could wait > for a little longer. > > Best Regards, > > Xinni Hi, Thanks for this message. This lowers the frustration! :) Let me know if there's any patch I could review. 
Cheers, Thomas Goirand (zigo) From mriedemos at gmail.com Tue Feb 27 10:55:01 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 27 Feb 2018 10:55:01 +0000 Subject: [openstack-dev] [nova][ptg] team photo Thursday at 11:10 AM In-Reply-To: <8AE0FD41-0C5B-4C48-A0C6-102987CB693D@gmail.com> References: <8AE0FD41-0C5B-4C48-A0C6-102987CB693D@gmail.com> Message-ID: <376f1ad1-f252-a224-1034-7223677d68aa@gmail.com> On 2/27/2018 10:43 AM, melanie witt wrote: > It will be really cold outside, so be prepared for that. If you live in California, sure... -- Thanks, Matt From kendall at openstack.org Tue Feb 27 10:59:30 2018 From: kendall at openstack.org (Kendall Waters) Date: Tue, 27 Feb 2018 04:59:30 -0600 Subject: [openstack-dev] [PTL][SIG][PTG]Team Photos In-Reply-To: References: Message-ID: <201A8805-411A-40CD-8106-BE6876CE37F9@openstack.org> Hi Luke and everyone, Please meet at the registration desk on level 5. Make sure you and your team are on time so we can stay on schedule. For teams that are less than 20 people, we will be taking the pictures on the side of the pitch and for the teams that are over 20 people, the pictures will be taken in the stands. As a reminder, NO ONE is allowed on the grass so please do not step directly on the pitch when we take the pictures. Cheers, Kendall > On Feb 27, 2018, at 4:45 AM, Luke Hinds wrote: > > Hi Kendall, > > Now the day has arrived, could you let us know about logistics..is there somewhere we should go to wait before being collected and heading to the pitch? > > Cheers, > > Luke > > On Wed, Feb 21, 2018 at 11:48 PM, Kendall Nelson > wrote: > Hello Everyone! > > I just wanted to remind you all that you have till Monday Feburary 26th to sign up if your team or group is interested in a team photo on the Croke Park pitch! We still have slots available Tuesday afternoon and Thursday morning. > > -Kendall (diablo_rojo) > > On Thu, Feb 8, 2018 at 10:21 AM Kendall Nelson > wrote: > This link might work better for everyone: > https://docs.google.com/spreadsheets/d/1J2MRdVQzSyakz9HgTHfwYPe49PaoTypX66eNURsopQY/edit?usp=sharing > > -Kendall (diablo_rojo) > > > On Wed, Feb 7, 2018 at 9:15 PM Kendall Nelson > wrote: > Hello PTLs and SIG Chairs! > > So here's the deal, we have 50 spots that are first come, first served. We have slots available before and after lunch both Tuesday and Thursday. > > The google sheet here[1] should be set up so you have access to edit, but if you can't for some reason just reply directly to me and I can add your team to the list (I need team/sig name and contact email). > > I will be locking the google sheet on Monday February 26th so I need to know if your team is interested by then. > > See you soon! > > - Kendall Nelson (diablo_rojo) > > [1] https://docs.google.com/spreadsheets/d/1J2MRdVQzSyakz9HgTHfwYPe49PaoTypX66eNURsopQY/edit?usp=sharing > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > -- > Luke Hinds | NFV Partner Engineering | CTO Office | Red Hat > e: lhinds at redhat.com | irc: lhinds @freenode | t: +44 12 52 36 2483 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From zhipengh512 at gmail.com Tue Feb 27 11:10:46 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Tue, 27 Feb 2018 11:10:46 +0000 Subject: [openstack-dev] [cyborg]reminder for team photo Message-ID: Just a kind reminder for team photo, our slot is 1:40, plz gather around the reg desk on time :) -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengh512 at gmail.com Tue Feb 27 11:12:23 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Tue, 27 Feb 2018 11:12:23 +0000 Subject: [openstack-dev] [nova][ptg] reminder: cyborg nova-interaction discussion at 2pm this afternoon In-Reply-To: References: Message-ID: THX Melanie ! :) On Tue, Feb 27, 2018 at 10:34 AM, melanie witt wrote: > Howdy everyone, > > This is just a reminder that we have some time scheduled to chat with the > Cyborg team at 2pm this afternoon at the Cyborg room (Suite 665). > > From our PTG etherpad agenda[1] : > > "Cyborg (previously known as Nomad) is an OpenStack project that aims to > provide a general purpose management framework for acceleration resources > (i.e. various types of accelerators such as Crypto cards,GPU, FPGA, > NVMe/NOF SSDs, ODP, DPDK/SPDK and so on). > https://wiki.openstack.org/wiki/Cyborg” > > If you are interested, please join us! > > -melanie > > [1] https://etherpad.openstack.org/p/nova-ptg-rocky at L572 > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Tue Feb 27 11:34:51 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 27 Feb 2018 11:34:51 +0000 Subject: [openstack-dev] [First Contact] [SIG] Rocky PTG Planning In-Reply-To: References: Message-ID: REMINDER! Our team photo is at 4:00 today. You need to be at Registration on Level 5 by 4:00 at the very latest. We will be escorted out to the pitch at that time and if you are late you will miss the opportunity! -Kendall (diablo_rojo) On Sun, Feb 25, 2018 at 2:51 PM Kendall Nelson wrote: > Hello! > > Can't wait to see you all tomorrow! > > I'm thinking we get started at 9:00? If you want to show up early and hang > out, the room (Canal Cafe) is available starting at 8:30 AM. > > -Kendall (diablo_rojo) > > > On Tue, 9 Jan 2018, 8:30 pm Kendall Nelson, wrote: > >> Hello Everyone :) >> >> I put us down for one day at the PTG and wanted to get a jump start on >> discussion planning. 
>> >> I created an etherpad[1] and wrote down some topics to get the ball >> rolling. Please feel free to expand on them if there are other details you >> feel we need to talk about or add new ones as you see fit. >> >> Also, please add your name to the 'Planned Attendance' section if you are >> thinking of attending. >> >> Thanks! >> >> -Kendall (diablo_rojo) >> >> [1] https://etherpad.openstack.org/p/FC_SIG_Rocky_PTG >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris at openstack.org Tue Feb 27 12:58:39 2018 From: chris at openstack.org (Chris Hoge) Date: Tue, 27 Feb 2018 12:58:39 +0000 Subject: [openstack-dev] [loci] Removing deprecated project-specific Loci repositories Message-ID: <7F5AFB03-D8AD-4CAA-A71A-B06BD5D138E1@openstack.org> On October 17, 2017, the Loci team retired the project-specific Loci repositories in favor of a single repository. This was done to consolidate development and prevent the anti-pattern of one repository with duplicated code for every OpenStack project. After this five month deprecation period, in which we have provided no support for those repositories, and with all development focused on the primary Loci repository, we are officially requesting[1] that the project specific repositories be removed from OpenStack infra hosting. * Loci has no requirements synching * The project-specific repositories have no project gating. * We have zeroed out the project-specific repositories If you're interested in Loci, the primary repository and project remains active, and we encourage your use and contributions.[2] [1] https://review.openstack.org/#/c/548268/ [2] https://git.openstack.org/cgit/openstack/loci/ From gmann at ghanshyammann.com Tue Feb 27 14:03:00 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 27 Feb 2018 14:03:00 +0000 Subject: [openstack-dev] [QA][PTG] QA Dinner Night In-Reply-To: References: Message-ID: Hi All, Based on doodle vote, we finalize the QA Dinner on Wed night. I have not booked any restaurant yet so please suggest if you know any good place. ll put the time and meeting place soon. -gmann On Thu, Feb 22, 2018 at 2:11 PM, Ghanshyam Mann wrote: > Hi All, > > I'd like to propose a QA Dinner night for the people attending QA > sessions at the Dublin PTG. I initiated a doodle vote [1] to choose > the appropriate date. > > Please vote as per your availability. > > ..1 https://doodle.com/poll/t7phezrq25zrqzz3 > > -gmann From gmann at ghanshyammann.com Tue Feb 27 14:10:44 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 27 Feb 2018 14:10:44 +0000 Subject: [openstack-dev] [QA][PTG] QA Team photo @12:00-12.10 PM Thursday Message-ID: Hi All, QA team photo slot for PTG is scheduled on Thursday at 12.00. Please gather @5th floor reception before time, may be good to plan to meet @11.55 AM. See you all there!. -gmann From mriedemos at gmail.com Tue Feb 27 14:56:56 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 27 Feb 2018 14:56:56 +0000 Subject: [openstack-dev] [cinder][nova] Update attachments on replication failover In-Reply-To: References: <5bd499af-5e1e-bf9c-8ff5-815cb8be543f@gmail.com> Message-ID: <7b15fdf1-02e5-14dd-16f9-a5c13d53439f@gmail.com> On 2/27/2018 10:02 AM, Matthew Booth wrote: > Sounds like the work Nova will have to do is identical to volume update > (swap volume). i.e. Change where a disk's backing store is without > actually changing the disk. That's not what I'm hearing. I'm hearing disconnect/reconnect. 
Only the libvirt driver supports swap volume, but I assume all other virt drivers could support this generically. > > Multi-attach! There might be more than 1 instance per volume, and we > can't currently support volume update for multi-attached volumes. Good point - cinder would likely need to reject a request to replicate an in-use multiattach volume if the volume has more than one attachment. -- Thanks, Matt From gema at ggomez.me Tue Feb 27 16:20:13 2018 From: gema at ggomez.me (Gema Gomez) Date: Tue, 27 Feb 2018 16:20:13 +0000 Subject: [openstack-dev] [kolla][ptg] Team dinner In-Reply-To: References: Message-ID: <01020161d8107734-cec2dc0e-2be6-46fb-96cf-e373da456c73-000000@eu-west-1.amazonses.com> +1 On 26/02/18 19:39, Paul Bourke wrote: > Hey Kolla, > > Hope you're all enjoying Dublin so far :) Some have expressed interest > in getting together for a team meal, how does Thursday sound? Please > reply to this with +1/-1 and I can see about booking something. > > Cheers, > -Paul > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From waboring at hemna.com Tue Feb 27 16:34:48 2018 From: waboring at hemna.com (Walter Boring) Date: Tue, 27 Feb 2018 16:34:48 +0000 Subject: [openstack-dev] [cinder][nova] Update attachments on replication failover In-Reply-To: <7b15fdf1-02e5-14dd-16f9-a5c13d53439f@gmail.com> References: <5bd499af-5e1e-bf9c-8ff5-815cb8be543f@gmail.com> <7b15fdf1-02e5-14dd-16f9-a5c13d53439f@gmail.com> Message-ID: I think you might be able to get away with just calling os-brick's connect_volume again without the need to call disconnect_volume first. calling disconnect_volume wouldn't be good for volumes that are being used, just to refresh the connection_info on that volume. On Tue, Feb 27, 2018 at 2:56 PM, Matt Riedemann wrote: > On 2/27/2018 10:02 AM, Matthew Booth wrote: > >> Sounds like the work Nova will have to do is identical to volume update >> (swap volume). i.e. Change where a disk's backing store is without actually >> changing the disk. >> > > That's not what I'm hearing. I'm hearing disconnect/reconnect. Only the > libvirt driver supports swap volume, but I assume all other virt drivers > could support this generically. > > >> Multi-attach! There might be more than 1 instance per volume, and we >> can't currently support volume update for multi-attached volumes. >> > > Good point - cinder would likely need to reject a request to replicate an > in-use multiattach volume if the volume has more than one attachment. > > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengh512 at gmail.com Tue Feb 27 16:48:29 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Tue, 27 Feb 2018 16:48:29 +0000 Subject: [openstack-dev] [cyborg]Team Dinner 6:30pm at Croke Park Hotel In-Reply-To: References: Message-ID: Hi Team, I reserved a table for 8 at Sideline Bar in the Croke Park Hotel for team dinner . 
Look forward to meat you guys there :P -------------- next part -------------- An HTML attachment was scrubbed... URL: From paul.bourke at oracle.com Tue Feb 27 16:57:22 2018 From: paul.bourke at oracle.com (Paul Bourke) Date: Tue, 27 Feb 2018 16:57:22 +0000 Subject: [openstack-dev] [kolla][ptg] Team dinner In-Reply-To: References: Message-ID: Ok, Thursday seems good for the majority. Venue is 'Against the Grain' [0]. You can get the number 16 bus from near the Croke Park hotel, or we can just arrange to share a couple of taxis. See you all at the sessions tomorrow :) -Paul [0] https://goo.gl/maps/pddiUwnr67B2 On 26/02/18 19:39, Paul Bourke wrote: > Hey Kolla, > > Hope you're all enjoying Dublin so far :) Some have expressed interest > in getting together for a team meal, how does Thursday sound? Please > reply to this with +1/-1 and I can see about booking something. > > Cheers, > -Paul > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From james.page at ubuntu.com Tue Feb 27 18:14:56 2018 From: james.page at ubuntu.com (James Page) Date: Tue, 27 Feb 2018 18:14:56 +0000 Subject: [openstack-dev] [charms] queens support release date Message-ID: Hi All We're not quite fully baked with Queens testing for the OpenStack charms for this week so we're going to push back a week to the 8th March to allow pre-commit functional testing updates to land. Cheers James -------------- next part -------------- An HTML attachment was scrubbed... URL: From john.griffith8 at gmail.com Tue Feb 27 18:34:52 2018 From: john.griffith8 at gmail.com (John Griffith) Date: Tue, 27 Feb 2018 11:34:52 -0700 Subject: [openstack-dev] [cinder][nova] Update attachments on replication failover In-Reply-To: References: <5bd499af-5e1e-bf9c-8ff5-815cb8be543f@gmail.com> <7b15fdf1-02e5-14dd-16f9-a5c13d53439f@gmail.com> Message-ID: On Tue, Feb 27, 2018 at 9:34 AM, Walter Boring wrote: > I think you might be able to get away with just calling os-brick's > connect_volume again without the need to call disconnect_volume first. > calling disconnect_volume wouldn't be good for volumes that are being > used, just to refresh the connection_info on that volume. > ​Hmm... but then you'd have an orphaned connection left hanging around for the old connection no? ​ > > On Tue, Feb 27, 2018 at 2:56 PM, Matt Riedemann > wrote: > >> On 2/27/2018 10:02 AM, Matthew Booth wrote: >> >>> Sounds like the work Nova will have to do is identical to volume update >>> (swap volume). i.e. Change where a disk's backing store is without actually >>> changing the disk. >>> >> >> That's not what I'm hearing. I'm hearing disconnect/reconnect. Only the >> libvirt driver supports swap volume, but I assume all other virt drivers >> could support this generically. >> >> >>> Multi-attach! There might be more than 1 instance per volume, and we >>> can't currently support volume update for multi-attached volumes. >>> >> ​Not sure I follow... why not? It's just refreshing connections, only difference is you might have to do this "n" times instead of once?​ > >> Good point - cinder would likely need to reject a request to replicate an >> in-use multiattach volume if the volume has more than one attachment. 
> > ​So replication is set on create of the volume, you could have a rule that keeps the two features mutually exclusive, but I'm still not quite sure why that would be a requirement here. ​ > >> >> -- >> >> Thanks, >> >> Matt >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rocha.porto at gmail.com Tue Feb 27 20:53:11 2018 From: rocha.porto at gmail.com (Ricardo Rocha) Date: Tue, 27 Feb 2018 21:53:11 +0100 Subject: [openstack-dev] [magnum][keystone] clusters, trustees and projects In-Reply-To: <2119f48f-c2b6-0087-3ad0-0bd77b210cd5@gmail.com> References: <2119f48f-c2b6-0087-3ad0-0bd77b210cd5@gmail.com> Message-ID: Hi Lance. On Mon, Feb 26, 2018 at 4:45 PM, Lance Bragstad wrote: > > > On 02/26/2018 10:17 AM, Ricardo Rocha wrote: >> Hi. >> >> We have an issue on the way Magnum uses keystone trusts. >> >> Magnum clusters are created in a given project using HEAT, and require >> a trust token to communicate back with OpenStack services - there is >> also integration with Kubernetes via a cloud provider. >> >> This trust belongs to a given user, not the project, so whenever we >> disable the user's account - for example when a user leaves the >> organization - the cluster becomes unhealthy as the trust is no longer >> valid. Given the token is available in the cluster nodes, accessible >> by users, a trust linked to a service account is also not a viable >> solution. >> >> Is there an existing alternative for this kind of use case? I guess >> what we might need is a trust that is linked to the project. > This was proposed in the original application credential specification > [0] [1]. The problem is that you're sharing an authentication mechanism > with multiple people when you associate it to the life cycle of a > project. When a user is deleted or removed from the project, nothing > would stop them from accessing OpenStack APIs if the application > credential or trust isn't rotated out. Even if the credential or trust > were scoped to the project's life cycle, it would need to be rotated out > and replaced when users come and go for the same reason. So it would > still be associated to the user life cycle, just indirectly. Otherwise > you're allowing unauthorized access to something that should be protected. > > If you're at the PTG - we will be having a session on application > credentials tomorrow (Tuesday) afternoon [2] in the identity-integration > room [3]. Thanks for the reply, i now understand the issue. I'm not at the PTG. Had a look at the etherpad but it seems app credentials will have a similar lifecycle so not suitable for the use case above - for the same reasons you mention. I wonder what's the alternative to achieve what we need in Magnum? 
Cheers, Ricardo > [0] https://review.openstack.org/#/c/450415/ > [1] https://review.openstack.org/#/c/512505/ > [2] https://etherpad.openstack.org/p/application-credentials-rocky-ptg > [3] http://ptg.openstack.org/ptg.html >> >> I believe the same issue would be there using application credentials, >> as the ownership is similar. >> >> Cheers, >> Ricardo >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From whayutin at redhat.com Tue Feb 27 21:12:15 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Tue, 27 Feb 2018 16:12:15 -0500 Subject: [openstack-dev] [ openstack-dev ][ tripleo ] unplanned outtage in RDO-Cloud In-Reply-To: References: Message-ID: On Fri, Feb 23, 2018 at 8:41 PM, David Moreau Simard wrote: > Please be wary of approving changes since the Third Party CI is out of > order until this is resolved. > > David Moreau Simard > Senior Software Engineer | OpenStack RDO > > dmsimard = [irc, github, twitter] > > > On Fri, Feb 23, 2018 at 5:40 PM, Wesley Hayutin > wrote: > > > > > > On Fri, Feb 23, 2018 at 12:09 PM, Wesley Hayutin > > wrote: > >> > >> Greetings, > >> > >> The TripleO CI in RDO-Cloud has experienced an unplanned outage and is > >> down at this time. We will update this thread with more information > >> regarding when the CI will be brought back online as it becomes > available. > >> > >> > >> Thank you! > >> Wes Hayutin > > > > > > FYI.. > > The latest estimate for the unplanned outtage to TripleO-CI in RDO-Cloud > is > > that it will take a number of business days to resolve the issues. > > > > Thank you! > > > > > > ____________________________________________________________ > ______________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > FYI.. First, thank you for your patience while the CI jobs in RDO Cloud were down. There are not a lot details to share regarding the outtage at this time, however services are being restored at this moment and we should see the 3rd party jobs and promotion jobs running shortly with some results rolling in by the morning. The effort to restore services involves several teams and takes a bit of coordination. I'm happy to report that all the teams involved have very dedicated engineers and we're making good progress. Thanks to David Manchado, David Simard and the TripleO CI squad. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ekcs.openstack at gmail.com Tue Feb 27 21:19:37 2018 From: ekcs.openstack at gmail.com (Eric K) Date: Tue, 27 Feb 2018 13:19:37 -0800 Subject: [openstack-dev] [congress] no team meeting this week 3/1 Message-ID: Cancelled for PTG From mriedemos at gmail.com Tue Feb 27 22:07:21 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 27 Feb 2018 22:07:21 +0000 Subject: [openstack-dev] [cinder][nova] Update attachments on replication failover In-Reply-To: References: <5bd499af-5e1e-bf9c-8ff5-815cb8be543f@gmail.com> <7b15fdf1-02e5-14dd-16f9-a5c13d53439f@gmail.com> Message-ID: <57f67e78-25cb-6b0e-59c5-0144b7952b20@gmail.com> On 2/27/2018 6:34 PM, John Griffith wrote: > ​ So replication is set on create of the volume, you could have a rule > that keeps the two features mutually exclusive, but I'm still not quite > sure why that would be a requirement here.  ​ Yeah I didn't think of that either, the attachment record has the instance uuid in it right? So cinder could just iterate the list of attachments for the volume and send multiple requests to nova. -- Thanks, Matt From anlin.kong at gmail.com Tue Feb 27 23:09:39 2018 From: anlin.kong at gmail.com (Lingxian Kong) Date: Wed, 28 Feb 2018 12:09:39 +1300 Subject: [openstack-dev] [qinling] project update - 4 Message-ID: This project update email is supposed to be sent regularly, but feel free to get in touch in #openstack-qinling irc channel anytime. - First, Qinling was approved as an official project at the end of Queens cycle, and will start its journey as an official project in Rocky development cycle, welcome any kind of contributions from the whole community - Qinling feature and bug tracking have been migrated from launchpad to StoryBoard[1], which is much more flexible for task tracking across multiple teams, multiple code repositories, or even multiple branches within those code repositories. - The Rocky task priorities have been sorted out, for anyone interested, please refer to https://storyboard.openstack.org/#!/worklist/251 - More additional Jenkins jobs were added to the project, such as doc, code coverage, release notes, etc. As usual, you can easily find previous emails below. [1]: https://storyboard.openstack.org/#!/project/927 Cheers, Lingxian Kong (Larry) ---------- Forwarded message ---------- From: Lingxian Kong Date: Wed, Jan 24, 2018 at 12:12 AM Subject: [openstack-dev] [faas] [qinling] project update - 3 To: OpenStack Development Mailing List Hi, all This project update is posted bi-weekly, but feel free to get in touch in #openstack-qinling anytime. - Function package md5 check. This feature allows user specify the md5 checksum for the code package when creating the function, so the function package could be verified after downloading. If CLI is used, the md5 checksum will be calculated automatically. - Function webhook. The user can expose a function to 3rd party service(e.g. GitHub) by creating webhook so that the function can be invoked without authentication. - [CLI] Support to download function code package. BTW, maybe some of you already know that Qinling team is applying to become an OpenStack official project[1], feel free to leave your comments in the application, any feedback and questions are welcomed. As usual, you can easily find previous emails below. 
[1]: https://review.openstack.org/#/c/533827/ Cheers, Lingxian Kong (Larry) ---------- Forwarded message ---------- From: Lingxian Kong Date: Mon, Jan 8, 2018 at 10:37 AM Subject: [openstack-dev] [faas] [qinling] project update - 2 To: OpenStack Development Mailing List Hi, all Happy new year! This project update is posted by-weekly, but feel free to get in touch in #openstack-qinling anytime. - Introduce etcd in qinling for distributed locking and storing the resources that need to be updated frequently. - Get function workers (admin only) - Support to detach function from underlying orchestrator (admin only) - Support positional args in users function - More unit tests and functional tests added - Powerful resource query filtering of qinling openstack CLI - Conveniently delete all executions of one or more functions in CLI You can find previous emails below. Have a good day :-) Cheers, Lingxian Kong (Larry) ---------- Forwarded message ---------- From: Lingxian Kong Date: Tue, Dec 12, 2017 at 10:18 PM Subject: [openstack-dev] [qinling] [faas] project update ​ - 1​ To: OpenStack Development Mailing List Hi, all Maybe there are aleady some people interested in faas implementation in openstack, and also deployed other openstack services to be integrated with (e.g. trigger function by object uploading in swift), Qinling is the thing you probably don't want to miss out. The main motivation I creatd Qinling project is from frequent requirements of our public cloud customers. For people who have not heard about Qinling before, please take a look at my presentation in Sydney Summit: https://youtu.be/NmCmOfRBlIU There is also a simple demo video: https://youtu.be/K2SiMZllN_A As the first project update email, I will just list the features implemented for now: - Python runtime - Sync/Async function execution - Job (invoke function on schedule) - Function defined in swift object storage service - Function defined in docker image - Easy to interact with openstack services in function - Function autoscaling based on request rate - RBAC operation - Function resource limitation - Simple documentation I will keep posting the project update by-weekly, but feel free to get in touch in #openstack-qinling anytime. -------------- next part -------------- An HTML attachment was scrubbed... URL: From anlin.kong at gmail.com Tue Feb 27 23:19:14 2018 From: anlin.kong at gmail.com (Lingxian Kong) Date: Wed, 28 Feb 2018 12:19:14 +1300 Subject: [openstack-dev] [qinling] Adding Hunt Xu to qinling core Message-ID: I'd like to add Hunt Xu to the qinling core team. As you know, Qinling project was just approved as an official openstack project, it's still very young and doesn't get a ton of activity or review, but Hunt Xu has been involving in the development and improvement for comparatively quite a while now: http://stackalytics.com/report/contribution/qinling/30 He's currently working on improving the tests which is important for Qinling at this stage (much appreciated!), we also need his vision and passion for the project which is definitely required to be a core reviewer. So unless there are objections, I'll plan on adding Hunt Xu to the qinling group this week. Cheers, Lingxian Kong (Larry) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lebre.adrien at free.fr Tue Feb 27 23:41:17 2018 From: lebre.adrien at free.fr (lebre.adrien at free.fr) Date: Wed, 28 Feb 2018 00:41:17 +0100 (CET) Subject: [openstack-dev] [FEMDC] no meeting this week In-Reply-To: <1668581843.514226461.1519774769854.JavaMail.root@zimbra29-e5> Message-ID: <564856825.514227680.1519774877785.JavaMail.root@zimbra29-e5> Hi all, Most of us are attending the PTG. Next meeting should be held on March, the 14th. ad_ri3n_ From sxmatch1986 at gmail.com Wed Feb 28 02:55:46 2018 From: sxmatch1986 at gmail.com (hao wang) Date: Wed, 28 Feb 2018 10:55:46 +0800 Subject: [openstack-dev] [Zaqar] Nominating yangzhenyu for Zaqar core In-Reply-To: References: Message-ID: +1, I'm glad to hear this, zhenyu is a great contributor in Zaqar team. Hope your great work in Rocky as well. 2018-02-27 17:22 GMT+08:00 Xiyuan Wang : > +1, zhenyu has done a lot of useful features in Zaqar. Such as delay queue > and message abstract support. Some others are on the list for Rocky as well. > Great work. > > 2018-02-26 1:38 GMT+00:00 Fei Long Wang : >> >> Hi team, >> >> I would like to propose adding Zhenyu Yang(yangzhenyu) for the Zaqar core >> team. He has been an awesome contributor since joining the Zaqar team. And >> now he is the most active non-core contributor on Zaqar projects for the >> last 180 days[1]. Zhenyu has great technical expertise and contributed many >> high quality patches. I'm sure he would be an excellent addition to the >> team. If no one objects, I'll proceed and add him in a week from now. >> Thanks. >> >> [1] http://stackalytics.com/report/contribution/zaqar-group/180 >> >> -- >> Cheers & Best regards, >> Feilong Wang (王飞龙) >> -------------------------------------------------------------------------- >> Senior Cloud Software Engineer >> Tel: +64-48032246 >> Email: flwang at catalyst.net.nz >> Catalyst IT Limited >> Level 6, Catalyst House, 150 Willis Street, Wellington >> -------------------------------------------------------------------------- >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From HoangCX at vn.fujitsu.com Wed Feb 28 03:47:38 2018 From: HoangCX at vn.fujitsu.com (HoangCX at vn.fujitsu.com) Date: Wed, 28 Feb 2018 03:47:38 +0000 Subject: [openstack-dev] [neutron][vpnaas] drivers removal Message-ID: Hi, Following the announced information by Takashi Yamamoto [1]. I have proposed a patch to remove the following drivers [2]: - CiscoCsrIPsecDriver - FedoraStrongSwanDriver - VyattaIPsecDriver Those drivers are intended to be removed in Rocky. So, please check it and leave comment if you still need those drivers and plan to provide maintaining effort to the drivers. 
[1] http://lists.openstack.org/pipermail/openstack-dev/2017-July/120264.html [2] https://review.openstack.org/#/c/543394/ Best regards, Hoang From glongwave at gmail.com Wed Feb 28 05:27:16 2018 From: glongwave at gmail.com (ChangBo Guo) Date: Wed, 28 Feb 2018 13:27:16 +0800 Subject: [openstack-dev] [ALL][PTLs] Community Goals for Rocky: Toggle the debug option at runtime Message-ID: Hi ALL, TC approved the goal [0] a week ago , so it's time to finish the work. we also have a short discussion in oslo meeting at PTG, find more details in [1] , we use storyboard to check the goal in https://storyboard.openstack.org/#!/story/2001545. It's appreciated PTL set the owner in time . Feel free to reach me( gcb) in IRC if you have any questions. [0] https://review.openstack.org/#/c/534605/ [1] https://etherpad.openstack.org/p/oslo-ptg-rocky From line 175 -- ChangBo Guo(gcb) Community Director @EasyStack -------------- next part -------------- An HTML attachment was scrubbed... URL: From mcdkr at yandex.ru Wed Feb 28 05:55:04 2018 From: mcdkr at yandex.ru (Vitalii Solodilov) Date: Wed, 28 Feb 2018 08:55:04 +0300 Subject: [openstack-dev] [oslo.db] oslo_db "max_retries" option Message-ID: <876451519797304@web34o.yandex.ru> Hi folks! I have a question about oslo_db "max_retries" option. https://github.com/openstack/oslo.db/blob/master/oslo_db/sqlalchemy/engines.py#L381 Why only DBConnectionError is considered as a reason for reconnecting here? Wouldn't it be a good idea to check for more general DBError? For example, DB host is down at the time of engine creation, but will become running some time later. --  Best regards, Vitalii Solodilov From gong.yongsheng at 99cloud.net Wed Feb 28 06:26:18 2018 From: gong.yongsheng at 99cloud.net (=?GBK?B?uajTwMn6?=) Date: Wed, 28 Feb 2018 14:26:18 +0800 (CST) Subject: [openstack-dev] [tacker] tacker project meeting time vote poll Message-ID: <6fca529f.805f.161db171311.Coremail.gong.yongsheng@99cloud.net> for more stackers to join tacker project meeting, we are voting new tacker project meeting time at: https://doodle.com/poll/59dwkpzp84gw9w45 if you are interested in tacker project, please join us. thanks yong sheng gong Tacker project team 99cloud -------------- next part -------------- An HTML attachment was scrubbed... URL: From ifat.afek at nokia.com Wed Feb 28 06:30:10 2018 From: ifat.afek at nokia.com (Afek, Ifat (Nokia - IL/Kfar Sava)) Date: Wed, 28 Feb 2018 06:30:10 +0000 Subject: [openstack-dev] [vitrage] No IRC meeting today Message-ID: Hi, We will not hold the weekly IRC meeting today due to the PTG discussions. We will meet again next week, March 6th. BR, Ifat -------------- next part -------------- An HTML attachment was scrubbed... URL: From therve at redhat.com Wed Feb 28 09:11:02 2018 From: therve at redhat.com (Thomas Herve) Date: Wed, 28 Feb 2018 10:11:02 +0100 Subject: [openstack-dev] [Zaqar] Nominating yangzhenyu for Zaqar core In-Reply-To: References: Message-ID: On Mon, Feb 26, 2018 at 2:38 AM, Fei Long Wang wrote: > Hi team, > > I would like to propose adding Zhenyu Yang(yangzhenyu) for the Zaqar core team. He has been an awesome contributor since joining the Zaqar team. And now he is the most active non-core contributor on Zaqar projects for the last 180 days[1]. Zhenyu has great technical expertise and contributed many high quality patches. I'm sure he would be an excellent addition to the team. If no one objects, I'll proceed and add him in a week from now. Thanks. > > [1] http://stackalytics.com/report/contribution/zaqar-group/180 +1! 
-- Thomas From lhinds at redhat.com Wed Feb 28 09:25:16 2018 From: lhinds at redhat.com (Luke Hinds) Date: Wed, 28 Feb 2018 09:25:16 +0000 Subject: [openstack-dev] [security] Security PTG Planning, x-project request for topics. In-Reply-To: References: Message-ID: Hi Pino, Thank you for your time demonstrating Tatu. If you like we could incubate Tatu into the security SIG. This would mean no change to project structure / governance etc, its more the project gains a regular slot on our weekly meetings to help get patches reviewed and encourage other contributors / feedback etc. We did this with projects such as Bandit before, until it found its own legs and momentum. Cheers, Luke On Mon, Feb 12, 2018 at 8:45 AM, Luke Hinds wrote: > > > On Sun, Feb 11, 2018 at 4:01 PM, Pino de Candia < > giuseppe.decandia at gmail.com> wrote: > >> I uploaded the demo video (https://youtu.be/y6ICCPO08d8) and linked it >> from the slides. >> > > Thanks Pino , i added these to the agenda: > > https://etherpad.openstack.org/p/security-ptg-rocky > > Please let me know before the PTG, if it will be your colleague or if we > need to find a projector to conference you in. > > >> On Fri, Feb 9, 2018 at 5:51 PM, Pino de Candia < >> giuseppe.decandia at gmail.com> wrote: >> >>> Hi Folks, >>> >>> here are the slides for the Tatu presentation: https://docs.goo >>> gle.com/presentation/d/1HI5RR3SNUu1If-A5Zi4EMvjl-3TKsBW20xEUyYHapfM >>> >>> I meant to record the demo video as well but I haven't gotten around to >>> editing all the bits. Please stay tuned. >>> >>> thanks, >>> Pino >>> >>> >>> On Tue, Feb 6, 2018 at 10:52 AM, Giuseppe de Candia < >>> giuseppe.decandia at gmail.com> wrote: >>> >>>> Hi Luke, >>>> >>>> Fantastic! An hour would be great if the schedule allows - there are >>>> lots of different aspects we can dive into and potential future directions >>>> the project can take. >>>> >>>> thanks! >>>> Pino >>>> >>>> >>>> >>>> On Tue, Feb 6, 2018 at 10:36 AM, Luke Hinds wrote: >>>> >>>>> >>>>> >>>>> On Tue, Feb 6, 2018 at 4:21 PM, Giuseppe de Candia < >>>>> giuseppe.decandia at gmail.com> wrote: >>>>> >>>>>> Hi Folks, >>>>>> >>>>>> I know the request is very late, but I wasn't aware of this SIG until >>>>>> recently. Would it be possible to present a new project to the Security SIG >>>>>> at the PTG? I need about 30 minutes. I'm hoping to drum up interest in the >>>>>> project, sign on users and contributors and get feedback. >>>>>> >>>>>> For the past few months I have been working on a new project - Tatu >>>>>> [1]- to automate the management of SSH certificates (for both users and >>>>>> hosts) in OpenStack. Tatu allows users to generate SSH certificates with >>>>>> principals based on their Project role assignments, and VMs automatically >>>>>> set up their SSH host certificate (and related config) via Nova vendor >>>>>> data. The project also manages bastions and DNS entries so that users don't >>>>>> have to assign Floating IPs for SSH nor remember IP addresses. >>>>>> >>>>>> I have a working demo (including Horizon panels [2] and OpenStack CLI >>>>>> [3]), but am still working on the devstack script and patches [4] to get >>>>>> Tatu's repositories into OpenStack's GitHub and Gerrit. I'll try to post a >>>>>> demo video in the next few days. >>>>>> >>>>>> best regards, >>>>>> Pino >>>>>> >>>>>> >>>>>> References: >>>>>> >>>>>> 1. 
https://github.com/pinodeca/tatu (Please note this is still >>>>>> very much a work in progress, lots of TODOs in the code, very little >>>>>> testing and documentation doesn't reflect the latest design). >>>>>> 2. https://github.com/pinodeca/tatu-dashboard >>>>>> 3. https://github.com/pinodeca/python-tatuclient >>>>>> 4. https://review.openstack.org/#/q/tatu >>>>>> >>>>>> >>>>>> >>>>>> >>>>> Hi Giuseppe, of course you can! I will add you to the agenda. We could >>>>> get your an hour if it allows more time for presenting and post discussion? >>>>> >>>>> We will be meeting in an allocated room on Monday (details to follow). >>>>> >>>>> https://etherpad.openstack.org/p/security-ptg-rocky >>>>> >>>>> Luke >>>>> >>>>> >>>>> >>>>> >>>>>> >>>>>> >>>>>> On Wed, Jan 31, 2018 at 12:03 PM, Luke Hinds >>>>>> wrote: >>>>>> >>>>>>> >>>>>>> On Mon, Jan 29, 2018 at 2:29 PM, Adam Young >>>>>>> wrote: >>>>>>> >>>>>>>> Bug 968696 and System Roles. Needs to be addressed across the >>>>>>>> Service catalog. >>>>>>>> >>>>>>> >>>>>>> Thanks Adam, will add it to the list. I see it's been open since >>>>>>> 2012! >>>>>>> >>>>>>> >>>>>>>> >>>>>>>> On Mon, Jan 29, 2018 at 7:38 AM, Luke Hinds >>>>>>>> wrote: >>>>>>>> >>>>>>>>> Just a reminder as we have not had many uptakes yet.. >>>>>>>>> >>>>>>>>> Are there any projects (new and old) that would like to make use >>>>>>>>> of the security SIG for either gaining another perspective on security >>>>>>>>> challenges / blueprints etc or for help gaining some cross project >>>>>>>>> collaboration? >>>>>>>>> >>>>>>>>> On Thu, Jan 11, 2018 at 3:33 PM, Luke Hinds >>>>>>>>> wrote: >>>>>>>>> >>>>>>>>>> Hello All, >>>>>>>>>> >>>>>>>>>> I am seeking topics for the PTG from all projects, as this will >>>>>>>>>> be where we try out are new form of being a SIG. >>>>>>>>>> >>>>>>>>>> For this PTG, we hope to facilitate more cross project >>>>>>>>>> collaboration topics now that we are a SIG, so if your project has a >>>>>>>>>> security need / problem / proposal than please do use the security SIG room >>>>>>>>>> where a larger audience may be present to help solve problems and gain >>>>>>>>>> x-project consensus. >>>>>>>>>> >>>>>>>>>> Please see our PTG planning pad [0] where I encourage you to add >>>>>>>>>> to the topics. 
>>>>>>>>>> >>>>>>>>>> [0] https://etherpad.openstack.org/p/security-ptg-rocky >>>>>>>>>> >>>>>>>>>> -- >>>>>>>>>> Luke Hinds >>>>>>>>>> Security Project PTL >>>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> ____________________________________________________________ >>>>>>>>> ______________ >>>>>>>>> OpenStack Development Mailing List (not for usage questions) >>>>>>>>> Unsubscribe: OpenStack-dev-request at lists.op >>>>>>>>> enstack.org?subject:unsubscribe >>>>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>>> ____________________________________________________________ >>>>>>>> ______________ >>>>>>>> OpenStack Development Mailing List (not for usage questions) >>>>>>>> Unsubscribe: OpenStack-dev-request at lists.op >>>>>>>> enstack.org?subject:unsubscribe >>>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>>>>> >>>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> Luke Hinds | NFV Partner Engineering | CTO Office | Red Hat >>>>>>> e: lhinds at redhat.com | irc: lhinds @freenode | t: +44 12 52 36 2483 >>>>>>> >>>>>>> ____________________________________________________________ >>>>>>> ______________ >>>>>>> OpenStack Development Mailing List (not for usage questions) >>>>>>> Unsubscribe: OpenStack-dev-request at lists.op >>>>>>> enstack.org?subject:unsubscribe >>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>>>> >>>>>>> >>>>>> >>>>>> ____________________________________________________________ >>>>>> ______________ >>>>>> OpenStack Development Mailing List (not for usage questions) >>>>>> Unsubscribe: OpenStack-dev-request at lists.op >>>>>> enstack.org?subject:unsubscribe >>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>>> >>>>>> >>>>> >>>>> >>>>> -- >>>>> Luke Hinds | NFV Partner Engineering | CTO Office | Red Hat >>>>> e: lhinds at redhat.com | irc: lhinds @freenode | t: +44 12 52 36 2483 >>>>> >>>> >>>> >>> >> > > > -- > Luke Hinds | NFV Partner Engineering | CTO Office | Red Hat > e: lhinds at redhat.com | irc: lhinds @freenode | t: +44 12 52 36 2483 > -- Luke Hinds | NFV Partner Engineering | CTO Office | Red Hat e: lhinds at redhat.com | irc: lhinds @freenode | t: +44 12 52 36 2483 -------------- next part -------------- An HTML attachment was scrubbed... URL: From pabelanger at redhat.com Wed Feb 28 09:36:41 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Wed, 28 Feb 2018 04:36:41 -0500 Subject: [openstack-dev] [infra][3rd party ci] Removal of jenkins user from DIB images Message-ID: <20180228093641.GA23722@localhost.localdomain> Greetings, As we move forward with using zuulv3 more and more and less and less of jenkins, we are continuing the clean up of our images. Specifically, if your 3rd party CI is still using jenkins please take note of the following changes[1]. By default our openstack-infra images will no longer be creating the jenkins user accounts however, other CI systems if jenkins user is still needed, you'll likely need to update your nodepool.yaml file and add jenkins-slave element directly. If you have issues, please join us in the #openstack-infra IRC channel on freenode. 
[1] https://review.openstack.org/514485/ From majopela at redhat.com Wed Feb 28 09:42:01 2018 From: majopela at redhat.com (Miguel Angel Ajo Pelayo) Date: Wed, 28 Feb 2018 09:42:01 +0000 Subject: [openstack-dev] [neutron] Increased port revisions on port creation after object engine facade patch In-Reply-To: References: Message-ID: On Mon, Feb 12, 2018 at 2:21 PM Ihar Hrachyshka wrote: > I would check how many commits are issued. If it's still one, there is > no issue as long as revision numbers are increasing. Otherwise, we can > take a look. BTW why do we discuss it here and not in upstream? Good point, I'm moving this to the openstack-dev list > Ihar > > On Mon, Feb 12, 2018 at 12:37 AM, Miguel Angel Ajo Pelayo > wrote: > > Hi folks :) > > > > We were talking this morning about the change for the new engine > facade > > in neutron [1], > > > > And we guess this could incur in overhead on the DB layer because if > we > > look at the corresponding networking-ovn change [2], we detected that the > > port revisions increases by 3 for port creation. > > > > We haven't looked at why, or how much overhead does it add to port > > creation. It'd be great to verify that we don't incur in much overhead, > or > > see if there is room for optimization. > > > > Best regards, > > > > > > > > [1] > > > https://github.com/openstack/neutron/commit/6f83466307fb21aee5bb596974644d457ae1fa60#diff-94eb611a8a3b29dbf8cd2aa2466a53b9R34 > > [2] > > > https://review.openstack.org/#/c/543166/3/networking_ovn/tests/functional/test_revision_numbers.py > -------------- next part -------------- An HTML attachment was scrubbed... URL: From prometheanfire at gentoo.org Wed Feb 28 11:33:42 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Wed, 28 Feb 2018 05:33:42 -0600 Subject: [openstack-dev] [requirements][puppet] upper-constraints is now un-frozen Message-ID: <20180228113342.datkxbh47hpvvuxr@gentoo.org> With the release of Queens, master upper-constraints is now unlocked. The only projects that should be concerned about this are cycle trailing jobs that have not branched. The only project I'm aware of that has not branched is [puppet]. So just be aware that we are going to start moving on to rocky. -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From megan at openstack.org Wed Feb 28 11:52:58 2018 From: megan at openstack.org (megan at openstack.org) Date: Wed, 28 Feb 2018 03:52:58 -0800 (PST) Subject: [openstack-dev] [refstack] Full list of API Tests versus 'OpenStack Powered' Tests Message-ID: <1519818778.086430998@apps.rackspace.com> An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: Screen Shot 2018-02-28 at 10.31.12 AM.png Type: image/png Size: 109258 bytes Desc: not available URL: From liu.xuefeng1 at zte.com.cn Wed Feb 28 12:00:09 2018 From: liu.xuefeng1 at zte.com.cn (liu.xuefeng1 at zte.com.cn) Date: Wed, 28 Feb 2018 20:00:09 +0800 (CST) Subject: [openstack-dev] =?utf-8?b?562U5aSNOiBSZTogIFtaYXFhcl0gTm9taW5h?= =?utf-8?q?ting_yangzhenyu_for_Zaqar_core?= In-Reply-To: References: e3076edd-6535-0988-64c6-0f1dd476d160@catalyst.net.nz, CAOEh+o1N6iPxwnoysvVUkvmui4rzOn058V+6c20yyr+owO2pwQ@mail.gmail.com Message-ID: <201802282000094289136@zte.com.cn> KzEuIA0KDQoNClNlbmxpbiBoYXMgaW50ZXJnYXRlZCBaYXFhciBwcm9qZWN0IGFzIG9uZSBvZiBp dHMgcmVjZWl2ZXIgdHlwZSwgc28gSSBrbm93IHpoZW5neXUgaGFzIGRvbmUgbWFueSB1c2VmdWwg ZmVhdXRyZXMgaW4gWmFxYXIuDQoNCg0KDQoNCg0KDQpGb2xsb3dpbmcgaXMgYSBicmllZiBpbnRy b2R1Y3Rpb24gYWJvdXQgU2VubGluIHJlY2VpdmVycywgdGhlIG1lc3NhZ2luZyB0eXBlIHJlY2Vp dmVyIHdpbGwgdXNlIFphcWFyLg0KDQoNCmh0dHBzOi8vZG9jcy5vcGVuc3RhY2sub3JnL3Nlbmxp bi9sYXRlc3QvdHV0b3JpYWwvcmVjZWl2ZXJzLmh0bWwgDQoNCg0KaHR0cHM6Ly9kb2NzLm9wZW5z dGFjay5vcmcvc2VubGluL2xhdGVzdC91c2VyL3JlY2VpdmVycy5odG1sIA0KDQoNCmh0dHBzOi8v ZG9jcy5vcGVuc3RhY2sub3JnL3Nlbmxpbi9sYXRlc3Qvc2NlbmFyaW9zL2F1dG9zY2FsaW5nX2Nl aWxvbWV0ZXIuaHRtbCANCg0KDQoNCg0KDQoNCkJlc3QgUmVnYXJkcw0KDQoNClh1ZUZlbmcNCg0K DQoNCg0KDQoNCg0K5Y6f5aeL6YKu5Lu2DQoNCg0KDQrlj5Hku7bkurrvvJpoYW93YW5nIDxzeG1h dGNoMTk4NkBnbWFpbC5jb20+DQrmlLbku7bkurrvvJpPcGVuU3RhY2sgRGV2ZWxvcG1lbnQgTWFp bGluZyBMaXN0IChub3QgZm9yIHVzYWdlIHF1ZXN0aW9ucykgPG9wZW5zdGFjay1kZXZAbGlzdHMu b3BlbnN0YWNrLm9yZz4NCuaXpSDmnJ8g77yaMjAxOOW5tDAy5pyIMjjml6UgMTA6NTkNCuS4uyDp opgg77yaUmU6IFtvcGVuc3RhY2stZGV2XSBbWmFxYXJdIE5vbWluYXRpbmcgeWFuZ3poZW55dSBm b3IgWmFxYXIgY29yZQ0KDQoNCisxLCBJJ20gZ2xhZCB0byBoZWFyIHRoaXMsIHpoZW55dSBpcyBh IGdyZWF0IGNvbnRyaWJ1dG9yIGluIFphcWFyDQp0ZWFtLiBIb3BlIHlvdXIgZ3JlYXQgd29yayBp biBSb2NreSBhcyB3ZWxsLg0KDQoyMDE4LTAyLTI3IDE3OjIyIEdNVCswODowMCBYaXl1YW4gV2Fu ZyA8d2FuZ3hpeXVhbjEwMDdAZ21haWwuY29tPjoNCj4gKzEsIHpoZW55dSBoYXMgZG9uZSAgYSBs b3Qgb2YgdXNlZnVsIGZlYXR1cmVzIGluIFphcWFyLiBTdWNoIGFzIGRlbGF5IHF1ZXVlDQo+IGFu ZCBtZXNzYWdlIGFic3RyYWN0IHN1cHBvcnQuIFNvbWUgb3RoZXJzIGFyZSBvbiB0aGUgbGlzdCBm b3IgUm9ja3kgYXMgd2VsbC4NCj4gR3JlYXQgd29yay4NCj4NCj4gMjAxOC0wMi0yNiAxOjM4IEdN VCswMDowMCBGZWkgTG9uZyBXYW5nIDxmZWlsb25nQGNhdGFseXN0Lm5ldC5uej46DQo+Pg0KPj4g SGkgdGVhbSwNCj4+DQo+PiBJIHdvdWxkIGxpa2UgdG8gcHJvcG9zZSBhZGRpbmcgWmhlbnl1IFlh bmcoeWFuZ3poZW55dSkgZm9yIHRoZSBaYXFhciBjb3JlDQo+PiB0ZWFtLiBIZSBoYXMgYmVlbiBh biBhd2Vzb21lIGNvbnRyaWJ1dG9yIHNpbmNlIGpvaW5pbmcgdGhlIFphcWFyIHRlYW0uIEFuZA0K Pj4gbm93IGhlIGlzIHRoZSBtb3N0IGFjdGl2ZSBub24tY29yZSBjb250cmlidXRvciBvbiBaYXFh ciBwcm9qZWN0cyBmb3IgdGhlDQo+PiBsYXN0IDE4MCBkYXlzWzFdLiBaaGVueXUgaGFzIGdyZWF0 IHRlY2huaWNhbCBleHBlcnRpc2UgYW5kIGNvbnRyaWJ1dGVkIG1hbnkNCj4+IGhpZ2ggcXVhbGl0 eSBwYXRjaGVzLiBJJ20gc3VyZSBoZSB3b3VsZCBiZSBhbiBleGNlbGxlbnQgYWRkaXRpb24gdG8g dGhlDQo+PiB0ZWFtLiBJZiBubyBvbmUgb2JqZWN0cywgSSdsbCBwcm9jZWVkIGFuZCBhZGQgaGlt IGluIGEgd2VlayBmcm9tIG5vdy4NCj4+IFRoYW5rcy4NCj4+DQo+PiBbMV0gaHR0cDovL3N0YWNr YWx5dGljcy5jb20vcmVwb3J0L2NvbnRyaWJ1dGlvbi96YXFhci1ncm91cC8xODANCj4+DQo+PiAt LQ0KPj4gQ2hlZXJzICYgQmVzdCByZWdhcmRzLA0KPj4gRmVpbG9uZyBXYW5nICjnjovpo57pvpkp DQo+PiAtLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t LS0tLS0tLS0tLS0tLS0tLS0tLS0tLQ0KPj4gU2VuaW9yIENsb3VkIFNvZnR3YXJlIEVuZ2luZWVy DQo+PiBUZWw6ICs2NC00ODAzMjI0Ng0KPj4gRW1haWw6IGZsd2FuZ0BjYXRhbHlzdC5uZXQubnoN Cj4+IENhdGFseXN0IElUIExpbWl0ZWQNCj4+IExldmVsIDYsIENhdGFseXN0IEhvdXNlLCAxNTAg V2lsbGlzIFN0cmVldCwgV2VsbGluZ3Rvbg0KPj4gLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t 
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0NCj4+DQo+Pg0K Pj4NCj4+IF9fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f X19fX19fX19fX19fX19fX19fX19fX19fDQo+PiBPcGVuU3RhY2sgRGV2ZWxvcG1lbnQgTWFpbGlu ZyBMaXN0IChub3QgZm9yIHVzYWdlIHF1ZXN0aW9ucykNCj4+IFVuc3Vic2NyaWJlOiBPcGVuU3Rh Y2stZGV2LXJlcXVlc3RAbGlzdHMub3BlbnN0YWNrLm9yZz9zdWJqZWN0OnVuc3Vic2NyaWJlDQo+ PiBodHRwOi8vbGlzdHMub3BlbnN0YWNrLm9yZy9jZ2ktYmluL21haWxtYW4vbGlzdGluZm8vb3Bl bnN0YWNrLWRldg0KPg0KPg0KPg0KPiBfX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fXw0KPiBPcGVuU3RhY2sgRGV2 ZWxvcG1lbnQgTWFpbGluZyBMaXN0IChub3QgZm9yIHVzYWdlIHF1ZXN0aW9ucykNCj4gVW5zdWJz Y3JpYmU6IE9wZW5TdGFjay1kZXYtcmVxdWVzdEBsaXN0cy5vcGVuc3RhY2sub3JnP3N1YmplY3Q6 dW5zdWJzY3JpYmUNCj4gaHR0cDovL2xpc3RzLm9wZW5zdGFjay5vcmcvY2dpLWJpbi9tYWlsbWFu L2xpc3RpbmZvL29wZW5zdGFjay1kZXYNCj4NCg0KX19fX19fX19fX19fX19fX19fX19fX19fX19f X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX18NCk9wZW5TdGFj ayBEZXZlbG9wbWVudCBNYWlsaW5nIExpc3QgKG5vdCBmb3IgdXNhZ2UgcXVlc3Rpb25zKQ0KVW5z dWJzY3JpYmU6IE9wZW5TdGFjay1kZXYtcmVxdWVzdEBsaXN0cy5vcGVuc3RhY2sub3JnP3N1Ympl Y3Q6dW5zdWJzY3JpYmUNCmh0dHA6Ly9saXN0cy5vcGVuc3RhY2sub3JnL2NnaS1iaW4vbWFpbG1h bi9saXN0aW5mby9vcGVuc3RhY2stZGV2 -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Wed Feb 28 12:40:01 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 28 Feb 2018 12:40:01 +0000 Subject: [openstack-dev] [oslo.db] oslo_db "max_retries" option In-Reply-To: <876451519797304@web34o.yandex.ru> References: <876451519797304@web34o.yandex.ru> Message-ID: <15898ac5-5dab-9a9e-484b-9373d84319b9@gmail.com> On 2/28/2018 5:55 AM, Vitalii Solodilov wrote: > Wouldn't it be a good idea to check for more general DBError? So like catching Exception? How are you going to distinguish from IntegrityErrors which shouldn't be retried, which are also DBErrors? -- Thanks, Matt From gmann at ghanshyammann.com Wed Feb 28 12:44:06 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 28 Feb 2018 21:44:06 +0900 Subject: [openstack-dev] [QA] Meeting: CANCELED Message-ID: Hi All, Today QA meeting is canceled as we all are in PTG. Next QA meeting will be on 8th 08:00UTC. -gmann From amotoki at gmail.com Wed Feb 28 13:22:34 2018 From: amotoki at gmail.com (Akihiro Motoki) Date: Wed, 28 Feb 2018 13:22:34 +0000 Subject: [openstack-dev] [neutron][sdk] Proposal to migrate neutronclient python bindings to OpenStack SDK In-Reply-To: References: <47367b72-05d8-4cf3-7bcb-9d800273723e@inaugust.com> Message-ID: Hi Gary, You are talking about vender extension support in OSC, but this is about python bindings. I believe this is another topic. Commands implemented in OSC repo already consumes OpenStack SDK, so the proposed change just increases the number of python bindings supported in SDK. Regarding the topic of third-party extension support after OSC transition, possible options in my mind is to keep "neutron" CLI, or to add some command which handle general network resources as neutronclient OSC plugin. It is a compromise considering the current neutron API. Note that OpenStackSDK proxy object supports keystoneauth.Adapter, so even after migrating to OpenStackSDK, this does not block arbitrary attributes immediately even though they are things we would like to avoid. 
If you and/or boden are interested in implementing so-called "generic network resource" commands in Neutron OSC plugin, I can help the effort (as it speed up neutron CLI deprecation). Akihiro 2018-02-26 13:39 GMT+00:00 Gary Kotton : > One of the concerns here is that the openstack client does not enable one to configure extensions that are not part of the core reference architecture. So any external third part that tries to have any etension added will not be able to leverage the openstack client. This is a major pain point. > > > On 2/26/18, 1:26 PM, "Monty Taylor" wrote: > > On 02/26/2018 10:55 AM, Rabi Mishra wrote: > > On Mon, Feb 26, 2018 at 3:44 PM, Monty Taylor > > wrote: > > > > On 02/26/2018 09:57 AM, Akihiro Motoki wrote: > > > > Hi neutron and openstacksdk team, > > > > This mail proposes to change the first priority of neutron-related > > python binding to OpenStack SDK rather than neutronclient python > > bindings. > > I think it is time to start this as OpenStack SDK became a official > > project in Queens. > > > > > > ++ > > > > > > [Current situations and problems] > > > > Network OSC commands are categorized into two parts: OSC and > > neutronclient OSC plugin. > > Commands implemented in OSC consumes OpenStack SDK > > and commands implemented as neutronclient OSC plugin consumes > > neutronclient python bindings. > > This brings tricky situation that some features are supported > > only in > > OpenStack SDK and some features are supported only in neutronclient > > python bindings. > > > > [Proposal] > > > > The proposal is to implement all neutron features in OpenStack > > SDK as > > the first citizen, > > and the neutronclient OSC plugin consumes corresponding > > OpenStack SDK APIs. > > > > Once this is achieved, users of OpenStack SDK users can see all > > network related features. > > > > [Migration plan] > > > > The migration starts from Rocky (if we agree). > > > > New features should be supported in OpenStack SDK and > > OSC/neutronclient OSC plugin as the first priority. If new feature > > depends on neutronclient python bindings, it can be implemented in > > neutornclient python bindings first and they are ported as part of > > existing feature transition. > > > > Existing features only supported in neutronclient python > > bindings are > > ported into OpenStack SDK, > > and neutronclient OSC plugin will consume them once they are > > implemented in OpenStack SDK. > > > > > > I think this is a great idea. We've got a bunch of good > > functional/integrations tests in the sdk gate as well that we can > > start running on neutron patches so that we don't lose cross-gating. > > > > [FAQ] > > > > 1. Will neutornclient python bindings be removed in future? > > > > Different from "neutron" CLI, as of now, there is no plan to > > drop the > > neutronclient python bindings. > > Not a small number of projects consumes it, so it will be > > maintained as-is. > > The only change is that new features are implemented in > > OpenStack SDK first and > > enhancements of neutronclient python bindings will be minimum. > > > > 2. Should projects that consume neutronclient python bindings switch > > to OpenStack SDK? > > > > Necessarily not. It depends on individual projects. > > Projects like nova that consumes small set of neutron features can > > continue to use neutronclient python bindings. > > Projects like horizon or heat that would like to support a wide > > range > > of features might be better to switch to OpenStack SDK. 
> > > > > > We've got a PTG session with Heat to discuss potential wider-use of > > SDK (and have been meaning to reach our to horizon as well) Perhaps > > a good first step would be to migrate the > > heat.engine.clients.os.neutron:NeutronClientPlugin code in Heat from > > neutronclient to SDK. > > > > > > Yeah, this would only be possible after openstacksdk supports all > > neutron features as mentioned in the proposal. > > ++ > > > Note: We had initially added the OpenStackSDKPlugin in heat to support > > neutron segments and were thinking of doing all new neutron stuff with > > openstacksdk. However, we soon realised that it's not possible when > > implementing neutron trunk support and had to drop the idea. > > Maybe we start converting one thing at a time and when we find something > sdk doesn't support we should be able to add it pretty quickly... which > should then also wind up improving the sdk layer. > > > There's already an > > heat.engine.clients.os.openstacksdk:OpenStackSDKPlugin plugin in > > Heat. I started a patch to migrate senlin from senlinclient (which > > is just a thin wrapper around sdk): > > https://review.openstack.org/#/c/532680/ > > > > > > For those of you who are at the PTG, I'll be giving an update on SDK > > after lunch on Wednesday. I'd also be more than happy to come chat > > about this more in the neutron room if that's useful to anybody. > > > > Monty > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > > > > -- > > Regards, > > Rabi Mishra > > > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From gkotton at vmware.com Wed Feb 28 13:44:44 2018 From: gkotton at vmware.com (Gary Kotton) Date: Wed, 28 Feb 2018 13:44:44 +0000 Subject: [openstack-dev] [neutron][sdk] Proposal to migrate neutronclient python bindings to OpenStack SDK In-Reply-To: References: <47367b72-05d8-4cf3-7bcb-9d800273723e@inaugust.com> Message-ID: <905665BE-6B65-44A2-A904-51124F94B6E3@vmware.com> Thanks for the clarification. On 2/28/18, 3:22 PM, "Akihiro Motoki" wrote: Hi Gary, You are talking about vender extension support in OSC, but this is about python bindings. I believe this is another topic. Commands implemented in OSC repo already consumes OpenStack SDK, so the proposed change just increases the number of python bindings supported in SDK. 
Regarding the topic of third-party extension support after OSC transition, possible options in my mind is to keep "neutron" CLI, or to add some command which handle general network resources as neutronclient OSC plugin. It is a compromise considering the current neutron API. Note that OpenStackSDK proxy object supports keystoneauth.Adapter, so even after migrating to OpenStackSDK, this does not block arbitrary attributes immediately even though they are things we would like to avoid. If you and/or boden are interested in implementing so-called "generic network resource" commands in Neutron OSC plugin, I can help the effort (as it speed up neutron CLI deprecation). Akihiro 2018-02-26 13:39 GMT+00:00 Gary Kotton : > One of the concerns here is that the openstack client does not enable one to configure extensions that are not part of the core reference architecture. So any external third part that tries to have any etension added will not be able to leverage the openstack client. This is a major pain point. > > > On 2/26/18, 1:26 PM, "Monty Taylor" wrote: > > On 02/26/2018 10:55 AM, Rabi Mishra wrote: > > On Mon, Feb 26, 2018 at 3:44 PM, Monty Taylor > > wrote: > > > > On 02/26/2018 09:57 AM, Akihiro Motoki wrote: > > > > Hi neutron and openstacksdk team, > > > > This mail proposes to change the first priority of neutron-related > > python binding to OpenStack SDK rather than neutronclient python > > bindings. > > I think it is time to start this as OpenStack SDK became a official > > project in Queens. > > > > > > ++ > > > > > > [Current situations and problems] > > > > Network OSC commands are categorized into two parts: OSC and > > neutronclient OSC plugin. > > Commands implemented in OSC consumes OpenStack SDK > > and commands implemented as neutronclient OSC plugin consumes > > neutronclient python bindings. > > This brings tricky situation that some features are supported > > only in > > OpenStack SDK and some features are supported only in neutronclient > > python bindings. > > > > [Proposal] > > > > The proposal is to implement all neutron features in OpenStack > > SDK as > > the first citizen, > > and the neutronclient OSC plugin consumes corresponding > > OpenStack SDK APIs. > > > > Once this is achieved, users of OpenStack SDK users can see all > > network related features. > > > > [Migration plan] > > > > The migration starts from Rocky (if we agree). > > > > New features should be supported in OpenStack SDK and > > OSC/neutronclient OSC plugin as the first priority. If new feature > > depends on neutronclient python bindings, it can be implemented in > > neutornclient python bindings first and they are ported as part of > > existing feature transition. > > > > Existing features only supported in neutronclient python > > bindings are > > ported into OpenStack SDK, > > and neutronclient OSC plugin will consume them once they are > > implemented in OpenStack SDK. > > > > > > I think this is a great idea. We've got a bunch of good > > functional/integrations tests in the sdk gate as well that we can > > start running on neutron patches so that we don't lose cross-gating. > > > > [FAQ] > > > > 1. Will neutornclient python bindings be removed in future? > > > > Different from "neutron" CLI, as of now, there is no plan to > > drop the > > neutronclient python bindings. > > Not a small number of projects consumes it, so it will be > > maintained as-is. 
> > The only change is that new features are implemented in > > OpenStack SDK first and > > enhancements of neutronclient python bindings will be minimum. > > > > 2. Should projects that consume neutronclient python bindings switch > > to OpenStack SDK? > > > > Necessarily not. It depends on individual projects. > > Projects like nova that consumes small set of neutron features can > > continue to use neutronclient python bindings. > > Projects like horizon or heat that would like to support a wide > > range > > of features might be better to switch to OpenStack SDK. > > > > > > We've got a PTG session with Heat to discuss potential wider-use of > > SDK (and have been meaning to reach our to horizon as well) Perhaps > > a good first step would be to migrate the > > heat.engine.clients.os.neutron:NeutronClientPlugin code in Heat from > > neutronclient to SDK. > > > > > > Yeah, this would only be possible after openstacksdk supports all > > neutron features as mentioned in the proposal. > > ++ > > > Note: We had initially added the OpenStackSDKPlugin in heat to support > > neutron segments and were thinking of doing all new neutron stuff with > > openstacksdk. However, we soon realised that it's not possible when > > implementing neutron trunk support and had to drop the idea. > > Maybe we start converting one thing at a time and when we find something > sdk doesn't support we should be able to add it pretty quickly... which > should then also wind up improving the sdk layer. > > > There's already an > > heat.engine.clients.os.openstacksdk:OpenStackSDKPlugin plugin in > > Heat. I started a patch to migrate senlin from senlinclient (which > > is just a thin wrapper around sdk): > > https://review.openstack.org/#/c/532680/ > > > > > > For those of you who are at the PTG, I'll be giving an update on SDK > > after lunch on Wednesday. I'd also be more than happy to come chat > > about this more in the neutron room if that's useful to anybody. 
> > > > Monty > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > > > > -- > > Regards, > > Rabi Mishra > > > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From Greg.Waines at windriver.com Wed Feb 28 13:52:26 2018 From: Greg.Waines at windriver.com (Waines, Greg) Date: Wed, 28 Feb 2018 13:52:26 +0000 Subject: [openstack-dev] [masakari] Any masakari folks at the PTG this week ? Message-ID: <3AF6B015-3D76-4123-B2B0-B3B527EEEB8E@windriver.com> Any masakari folks at the PTG this week ? Would be interested in meeting up and chatting, let me know, Greg. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Dinesh.Bhor at nttdata.com Wed Feb 28 14:18:02 2018 From: Dinesh.Bhor at nttdata.com (Bhor, Dinesh) Date: Wed, 28 Feb 2018 14:18:02 +0000 Subject: [openstack-dev] [masakari] Any masakari folks at the PTG this week ? In-Reply-To: <3AF6B015-3D76-4123-B2B0-B3B527EEEB8E@windriver.com> References: <3AF6B015-3D76-4123-B2B0-B3B527EEEB8E@windriver.com> Message-ID: Hi Greg, We below are present: Tushar Patil(tpatil) Yukinori Sagara(sagara) Abhishek Kekane(abhishekk) Dinesh Bhor(Dinesh_Bhor) Thank you, Dinesh Bhor ________________________________ From: Waines, Greg Sent: 28 February 2018 19:22:26 To: OpenStack Development Mailing List (not for usage questions) Subject: [openstack-dev] [masakari] Any masakari folks at the PTG this week ? Any masakari folks at the PTG this week ? Would be interested in meeting up and chatting, let me know, Greg. ______________________________________________________________________ Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged, confidential, and proprietary data. If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tpb at dyncloud.net Wed Feb 28 14:40:49 2018 From: tpb at dyncloud.net (Tom Barron) Date: Wed, 28 Feb 2018 09:40:49 -0500 Subject: [openstack-dev] [manila] no meeting March 1 Message-ID: <20180228144049.6caeu5w2aktupuzd@barron.net> Just a quick reminder that there will be *no* weekly manila team meeting Thursday March 1 since many folks are busy at PTG. Cheers, -- Tom -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From sean.mcginnis at gmx.com Wed Feb 28 15:05:47 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 28 Feb 2018 09:05:47 -0600 Subject: [openstack-dev] OpenStack Queens is officially released! Message-ID: <4933D6DC-E270-4406-BE5E-8749A8738AD8@gmx.com> Hello OpenStack community, I'm excited to announce the final releases for the components of OpenStack Queens, which conclude the Queens development cycle. You will find a complete list of all components, their latest versions, and links to individual project release notes documents listed on the new release site. https://releases.openstack.org/queens/ Congratulations to all of the teams who have contributed to this release! The Rocky cycle work is off to a good start this week with the Project Team Gathering in Dublin. Let's keep the momentum going! Thanks, Sean -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Wed Feb 28 16:36:19 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 28 Feb 2018 16:36:19 +0000 Subject: [openstack-dev] [ptl][release] missing queens release notes pages Message-ID: <1519835678-sup-5481@lrrr.local> We have quite a few projects that publish release notes but have not approved the patch to ensure their queens release notes are published properly. Please look for a patch from the proposal bot with the subject "Update reno for stable/queens", fix it if needed, and approve it quickly. 
Doug barbican no release notes page at https://docs.openstack.org/releasenotes/barbican/queens.html cloudkitty-dashboard no release notes page at https://docs.openstack.org/releasenotes/cloudkitty-dashboard/queens.html cloudkitty no release notes page at https://docs.openstack.org/releasenotes/cloudkitty/queens.html congress-dashboard no release notes page at https://docs.openstack.org/releasenotes/congress-dashboard/queens.html instack-undercloud no release notes page at https://docs.openstack.org/releasenotes/instack-undercloud/queens.html kolla-ansible no release notes page at https://docs.openstack.org/releasenotes/kolla-ansible/queens.html kolla no release notes page at https://docs.openstack.org/releasenotes/kolla/queens.html kuryr no release notes page at https://docs.openstack.org/releasenotes/kuryr/queens.html manila no release notes page at https://docs.openstack.org/releasenotes/manila/queens.html networking-ovn no release notes page at https://docs.openstack.org/releasenotes/networking-ovn/queens.html os-net-config no release notes page at https://docs.openstack.org/releasenotes/os-net-config/queens.html puppet-tripleo no release notes page at https://docs.openstack.org/releasenotes/puppet-tripleo/queens.html python-heatclient no release notes page at https://docs.openstack.org/releasenotes/python-heatclient/queens.html python-manilaclient no release notes page at https://docs.openstack.org/releasenotes/python-manilaclient/queens.html tempest no release notes page at https://docs.openstack.org/releasenotes/tempest/queens.html tripleo-common no release notes page at https://docs.openstack.org/releasenotes/tripleo-common/queens.html tripleo-heat-templates no release notes page at https://docs.openstack.org/releasenotes/tripleo-heat-templates/queens.html tripleo-image-elements no release notes page at https://docs.openstack.org/releasenotes/tripleo-image-elements/queens.html tripleo-puppet-elements no release notes page at https://docs.openstack.org/releasenotes/tripleo-puppet-elements/queens.html tripleo-ui no release notes page at https://docs.openstack.org/releasenotes/tripleo-ui/queens.html tripleo-validations no release notes page at https://docs.openstack.org/releasenotes/tripleo-validations/queens.html watcher-dashboard no release notes page at https://docs.openstack.org/releasenotes/watcher-dashboard/queens.html From Kaitlin.Farr at jhuapl.edu Wed Feb 28 22:13:41 2018 From: Kaitlin.Farr at jhuapl.edu (Farr, Kaitlin M.) Date: Wed, 28 Feb 2018 22:13:41 +0000 Subject: [openstack-dev] [barbican][castellan] Stepping down from core Message-ID: Hi Barbicaneers,   I will be moving on to other projects at work and will not have time to contribute to OpenStack anymore.  I am stepping down as core reviewer as I will not be able to maintain my responsibilities.  It's been a great 4.5 years working on OpenStack and a fulfilling 3 years as a Barbican core reviewer.   The recent growing interest in Castellan and Barbican for key management to support new security features is encouraging.  The rest of the Barbican team will do a great job managing Barbican, Castellan, and Castellan-UI.   If you have any pressing concerns or questions, you can still reach me by email.   Thanks so much, Kaitlin Farr From jungleboyj at gmail.com Wed Feb 28 22:58:51 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Wed, 28 Feb 2018 16:58:51 -0600 Subject: [openstack-dev] [cinder][ptg] Dinner Outing Update and Photo Reminder ... 
Message-ID: <63f47b3a-5703-3095-d1c6-93fc45d7a19e@gmail.com> Team, Just a reminder that we will be having our team photo at 9 am tomorrow before the Cinder/Nova cross project session.  Please be at the registration desk before 9 to be in the photo. We will then have the Cross Project session in the Nova room as it sounds like it is somewhat larger.  I will have sound clips in hand to make sure things don't get too serious. Finally, an update on dinner for tomorrow night.  I have moved dinner to a closer venue: Fagan's Bar and Restaurant:  146 Drumcondra Rd Lower, Drumcondra, Dublin 9 I have reservations for 7:30 pm.  It isn't too difficult a walk from Croke Park (even in a blizzard) and it is a great pub. Thanks for a great day today! See you all tomorrow!  Let's make it a great one!  ;-) Jay From aspiers at suse.com Wed Feb 28 23:03:00 2018 From: aspiers at suse.com (Adam Spiers) Date: Wed, 28 Feb 2018 23:03:00 +0000 Subject: [openstack-dev] [masakari] Any masakari folks at the PTG this week ? In-Reply-To: References: <3AF6B015-3D76-4123-B2B0-B3B527EEEB8E@windriver.com> Message-ID: <20180228230300.pgajhg5u5rjv3nyb@arabian.linksys.moosehall> My claim to being a masakari person is pretty weak, but still I'd like to say hello too :-) Please ping me (aspiers on IRC) if you guys are meeting up! Bhor, Dinesh wrote: >Hi Greg, > > >We below are present: > > >Tushar Patil(tpatil) > >Yukinori Sagara(sagara) > >Abhishek Kekane(abhishekk) > >Dinesh Bhor(Dinesh_Bhor) > > >Thank you, > >Dinesh Bhor > > >________________________________ >From: Waines, Greg >Sent: 28 February 2018 19:22:26 >To: OpenStack Development Mailing List (not for usage questions) >Subject: [openstack-dev] [masakari] Any masakari folks at the PTG this week ? > > >Any masakari folks at the PTG this week ? > > > >Would be interested in meeting up and chatting, > >let me know, > >Greg. > >______________________________________________________________________ >Disclaimer: This email and any attachments are sent in strictest confidence >for the sole use of the addressee and may contain legally privileged, >confidential, and proprietary data. If you are not the intended recipient, >please advise the sender by replying promptly to this email and then delete >and destroy this email and any attachments without any further use, copying >or forwarding. >__________________________________________________________________________ >OpenStack Development Mailing List (not for usage questions) >Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev